[Binary artifact — not a text document. This is a POSIX ustar tar archive (owner core:core), a Zuul CI output bundle. Recoverable header metadata:

    var/home/core/zuul-output/                      (directory)
    var/home/core/zuul-output/logs/                 (directory)
    var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed kubelet log)

The remainder of the file is the gzip-compressed payload of kubelet.log, which is not recoverable as text from this rendering.]
tj+";g'ۭ^m#uq؋4'Qu3NQMa[N`:W)]mg9zx$/vx%UcE9I 0LHl(`_VjCJVyGJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%U!IyNJ l(Zpg^J zJ rH DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@zJ AI }'ul@ L|J6I @xN@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H r@8Ý;u>J (}J X;^H@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJUV2ym5mOo/ǛチNf$-맋@Hg$\Jpkݹ`|QkH nw$m;@j`^P" bbQ5+ Q6&P&L_hy_MR Z3./`O?52g֜Mn溳IӰVrf_`aAwz:2} Z4&2G y~ Gm@o$K@wD%O9h4^K%VxT:ݶL3[/fsK/\h-j ta=c4տ橥\dnzu߶ e?[otH%?jbWe|&wxV'[!?o'mXWI)c7vMjNB)\dpB$l[{6{>J!a ዄmQ!hztQ~X\M釿۟b[,u')/~wGρ_]u]ūP / a;jP):YzML}7Ow4eQ]4m:3!ٔʹB[h@Ih4G3]25Mu&=.gˢUXɔwer^\fJͅKse]UaRK a$k12!xߚ Bᾁ\(Mf\:c&JS\b.0˽µ} U&gP2iꙤlO<:OWeņ1YqMm[$ .EUWGb h o-^,9y`1m[ *"u d J?>N߷̞F377 !:nS'W}\.oӣx78=g9d9rGlvۡLYY j6חTf:,v1Dѱ#FKT1r=ꂅ0<ɪ7;es['8:rSGۄn֌?:|?|xmW7ZHv+m|1Ο.IY m,hDe>Y62T>ϧvo~/>̏U#qm?پہv|M$\|j3XAn]a>慡oZbvGqo/֗F >ɇVi׎Sݦ/Mwkt+ |qzy_^=YOkXY^߬JZ~?ҮSUv A-5c/rׂN{<=З ?=m9ȇd of.x';vAASY*{@E,{ܲ/hk )po,컿_ʖã]NҫLX;IP~ 6sۄNu1s+ gʔ6G|7{A6{ܷ[0S1|b:~ڤW_֥oý~Yh| ooygt6"ܖV7Zmk.ݾh#OZweݬBBns000D]m><9_=}67+;\m͞r?v-]y:ʼnvG۸D/ᠿ1x-Ƃ0Ny-Jљt'ۖt),8x]6Zeo2Yxu}-bo]gYw 2uF8>;lk.vH[.l:zYu Y %aujP{CӚ O;7 'WAnr#qẌ́Y;~\wh:g~}RپÿGhT IV{|K!JNJ&z ۘeȔMUhb9L^ZI%,sLDSF p: SV#g;Mi{y1+܊KP2L@/aȰ%8[Cl(UN-opl╜ m2T8 EAZ}Oɱ),dC,}d$P :ePrN)JN.-(z+Gg [f0A+s33XY҄`5 #hr'+d eeu<9:P7P\9"%2nDM ʉkq+Ҫ$ws/.j8)fm3޲@nJnȐR}.^<ƫ\CzeoVm xO$J|^?oBg\e['Q)ܭuW-eIBj*lwYίW::PH#/8έw1Y3B{6iBӹ[Kڢ siw\o?Ul/jqv0};Wדt&|?m0 )DLbo@O$Yo<=j\\O^N.>\ ?ǿzU{3 4El>5Z]Ո6hT*I\MDr7bu?*cٟd-xi#pHsJ[\eΑ2E8d,Z!{Lq#*nIݎYfXw!>o5nU1w cN[)Ltƶ}W`è=CF~L*". !cDiIyJ(>q{8NHsh([No(Hǧݼgz6J/qp;%1 c.:!*.#J!xC@@(LgEj߻_ٗQ@<(9$TFi¬@:CcHjeD@ѲNu XƯ̆ 1Ee'`'T ɺTD ZRm9Y&RT|j~ubK/&>ܴY fhmEk[wj OKm/{U=[=" ȁx:F0<1 Y&jf/|/E Uq6Y7A\}\*X$" k2 ڒ9;T݈R" GBBzY~NnWŇэbv ^nz3Msɟ]Єh8z0NfE,`2lS$46+ Qyfash బ ):C")IL,%C(dDDjpec="3f"hZI0RB[.%BY,Wd౎ zN9m9lLEw+ZU/b;ḋ[Mkۣ"pX|~/Rkt"FG6{ôwN .QGڏ[u+XY]Y"Z&!SJ8 Lxˤ6u3 fّY=S[ o۶\}n2.N;^˝;kR՜bQ%2r2xF,g ++[*t8X68'7=,֠V˜u¶ގJ'b 9b4R {r{ sqmg8Yj\ߊj=E9NDߵ/6GOj$=:IXBF^a! 
6) (ush7۰.M>s M:wq.9KF#s(R jÅAcEJL:U6uxu}y,9>#s|V7k)DDLHQ4`A5D T>[ΆZPƣ/_Y4HqS itT\BНj3egbx1^H XT#5jBPKB ʻ((XHO4*,6ʌ ;6`#*$1@T!@[M+ǣ$6wϖ`u)WUf,ȲE L 4-ib&R>@h>?{UHjr{ O^Uã7fx|j^vr;v|;( E @ Kd68%t}Gi,*CmW2:;#@T6b ?JBw{+I ZN'`GAQ #̧#%˺:|b⑉bo&J\^L (jC$QzPm}{70=lIڶM׍nw/b>I-;NnX,r8K*IAWI_0`Ajq2ޡٰ6^L w sNk4,PnEocJ)*4QEEsf8,7* DI}=T/ZR]8\/x6@9PUt'#ޖ'-F3-A\*3☧<G&QY.]1V% t}*n<|cɒҢڴj,Y )vujDRIvVz/oAΕe.!r9gB=bR9c6([8S);艴#`\L;}s6%)H Ǽι`1X>_PEmI 9Zr;O0;%i%igiT}y¥0>H\jO\ y7v5EM>rhvry79MV._2t(1|}>lx ߇7te_"|]Nش:8}68Fg;Π_σ]f5稪6]jN.SkEiRR U*r;HO7sܣASQ=ʫ$~▱((c`TŜ9s˭2tIvꔞe``ש(wp)Bt*ƌƁs2Bނ9+j$ӳԳ1+и[2n_'R Y_&m$%ą(Dȵ1O%&zG]"";z:^j׀vmɿ~ ̿6/l;M4-k_ӚgBeP\ͫ%*Wqˉ97+|>WiYe:Q;i:Ѻ4ez#e|l`eC[ ǃ!R1FЊuLh(xV2j $qNCrE؄ A2 ǨU1 QYJyr-1kYˁxaQ^_ ;L]V:lX^"r!kX,nwu=&ְ- =+.7*8bqS`~9bi=+kmS4* > wT%mfl7Eyi]@4:&ḛ4Hc  B' L}i//={p>MƓ+yjba,g7qUPFՄxuTi`gX֡q=t?hڄX`D8Lj'R20I!j́R8s:⎐%\V2-l:o)Fذ`4GD:IQV! _}YpYrQY>&FҸQ037%/ ݬ8CXOpUqR(VU i엟gfa'?gVyg(DV|}Ȗ *RRv1 Ktp&vp}PL^w@0! H ˥CRRQ-װn}wZ_9M>l|f`>^Ï:4%)[vJX)S Q+r%t6q 90>Q9խyr1¯ͫŏWɼbvY ?yT߬휓(o&o0R"]^| ޠդ?J 3ve&* cCK8ͬJV?z4 Dpfy!_ىq`.e{}'%1Y.|7;{Uk`.>eƠOWۧz2㔋I)/\!8^&-7?ԋ9f2kr ʧClsY04\`D$/}ݔ2p>mNB'!C 3.ZP15vX0F1[f b v3O+7[,0͍=c7RKR4xę;o$b"$PI!3*p:ՠ- HpB5' Ebp0D)6߫{ `fe``L&`6)''G8 8jʒϦ<<@3,Q?*L,Q%|KxPw?lgQJ'lv0G p%O/'_ ?Pq,_|}{y람3V|L~r9r%bmVkR'cE)tdhU%E5âr`A:P~n7$")nsʕ{d(9`M'JeК`imn"j FQ&H0 Lk=v$G,O>gуjmF9DŽRn&x9` 1rbrX+/B"۶[n8t< )ZSk]~rB]FNq|]U+Oc/jr+ \de.^˟7sZbgB:ZV]mg usr ̂HB9U R˽)vֶ.vTTL[(&z*Z^:_m\:zyjd\s4W'q޶R.d--pRL@Gm7`̙֌]3nR [BՅ/Fta3]8(m򘢘5iYw;MKOx 4v+-F&xMMhTH'K!#a:0- 9ɖ-guz5DmHXBg !00 A|@N@`%D Tƴ!a+N:{E2FHHubĴMdu4( H\kCj:I$}JYKb-p&PQ*FqI羔GMEK.H0cD(Lـs)Jrz E-)œs+2M=Lv {Vx?w-y sZD)3ln%9YԽD(>Y"$&Z#Awyl0:%ǖ`#B:/pAw"ZNR}!\k6<g>LW-1CFh=0 m&yjGG| z-Om].Dy qsm˃w4j+"W(E=0 LEw6C)PQeM&ޖŀ9(q2yG,P"ӹT8!1G"aV&Ik VF=AsbOX ?v3|%gS4a5},vLLsp.X̱|2z4s#Vqύ/urX;p+Js,&Z ,4+KW,9Cd;œ=]0-6|d^N+g+3fR1&PۿgbnW5*eoB)TRi,5^6:I"X2BT7U r ;umݤ*@Ṕ뚁cޗ3Jy LՙԯtB Uϭ] h^{wifMrԚІ674ђ@&3LZZ;͆@3M+ W?fÙ栅SD&E+\AlDW$ܯyhZ(c7<%>ևH߼&bM:za뗛+]Im"`wdOu1ަgtwmM:y.Ւ-Zu甉3cn$L ѓ(.vJ`M}Ԛ |t #923B!mcIܔATcIFJRo5 
sovG&Yy[[W6SVttnt{wDn$mjNu9x=a%/6A/ bʳOsxz;/j vwun} MfوkiW|u([c!fH ^ wE%0ZKK!Ax$cx eEH||105Pru$&7ɵܨ7̍Se#2U:&"qX,NDg-R800_FWSb ];J3cM4)};9cK+Ǜm6^{l[xMwV-3sw ZGK!^a4j|sWLR+IXXW?(ߋd֐OJe:mhK#.r Ü!jLD^]drh΍29J0#5#^>!0>mѷW%vB{y:wB03ϑEYr#JhE k1%TJ;#2RFšX$`[͒C‚0!S&aJT(K)µDXGg-gɵFNC9Pm]&Jz.`ˣĹf9؀Fmo^,˓#l&Oݘ]di<>=h"'H^Ykǘ"/$y,53τ4.E Rr VTD9r 3b*r@1  Ay*'As`.z%Ҵh) +96aX6˙8: vcF5!s XEhJ J l,(?gP"Ų};z3rrHX)/'qQb[ա %t>Q;>{33&gO4};+_O!F|Cnu0[%`&|o~ bw{W-Ջp_'2zRTn]Ouݐn8T-aeQϳ/&1할ͽYkX-1P^< d$ú}:8bIm\ u2h0{nf)}ۛ7vC!m_J}aT}%9ˌ8s& ~^ CM-?C Ǚ=܁| J~>͛ހR5ˤRGB ܉N|>+]gtuMkеmz:Mmu5u_Mʫ_E:t3z`Qh.&\E7/6G:I}NgAaҚM*`tn [RL2ismiO1pn&/<FsD[SFsOE."9|$8G r3LJٷ6蹁xI53TWNkKC?kw~.G;'f0A5vUt%C>+c`X" kq򀯉J]1Nh5O˵Gj[H 3*g XD݅v`.4Bn\av;, -XDa"{&CDT8)RRp&DpCLDK9 Q ƑՁN O+IU*?W|Gj" ? Wb=-TqM' r N@%cNǨ)I[x0GF@pIm'|% l_}eK8S|b>Ycv#c(g5o*DW)N]|zvz]W[uPW^! wFI +" 0#o0nSуly·KWZ<rc͕3&GAK 8"0{<-)q6b-om7WB!+48Lu`7nAXg aK]eC@N|uo|5 r[vfVk"RjNE2>*|TpKKhWU_M5Ed&S_QW IEгoDEʷQ Fy`ܰLF36wV1pz"lOemIڒm52h칬>ZKO,< 6"W"xr9x6 Vzo0Śx5z͑F;-8xBDm5 -imNybQvȑXwumHynOXib<,04/ ?(VRr~R-˖ԑ(ew'lVNxJH([7'.d},+e dh2mP*pYkC戆Jq&r@5qB7O,ܣ=*M9Fzmqϙ5| 8в^;cX2aŕLuaH<40%#$&445f [4ae2Z;lo]FYyg:t'X#!ꐽi|\H?mhB!/>s>Ys )ȍ.7J&!XfD&ݝG-XZ8<(d>_bfvy?[>SZ$g9IɧUYȫE cL3љ "G)iF]Cdm<>hp| ;r,dkmFĤCxRU‚,qDQRsr1`T2аgQF{C:,"\뤤-I' F$ p`f!ƪ/,O% ڇj?>Gnnw];?yrh%j\43_]^_I x]tjPKfz4>9P}gs!lewݿ}Ƈ;o=zh==_q}|d'v~*y֨%{X}^]iA/䗑"+6b asiWtZvjNQI\`WsZ`)jx)V+tݬiUk0ǟbR2XtMk, hZE'1x:p{PPqShu*9B?칋P *o`\΀\F饊^zʥEDNŒfo`ylèc٪x?%5_]|55rW/ߏ'<2 g'䍮TtM7kur~I(:7-o4q5`zN XXV:; _:F-")bȊ (-Fgub"ܚ}vo&~_z)=!)U<9u}^sJJtb r(hZ4.?'ٽK !Y 'Ξ2W={d zg䦰5)e-Le2 I2U,D G+Qim>#c]]qSXI*9hfHCIFs+Фcxo7|hOuN8(u&mNdZXTd";%{J%GɍTYLcY^H`#A\ficr䷚8==ݟoNWB fV^ iYCf՞myfnϫR3LoU({mfZ=, "dQPYPu+*%+69.*@^` L'ʁ眵9/Yo@˕+jkM6˘b}l7#d5Wt($fE\u5͊ Y҉;mfQq}.O|)9?t6)_V[խ¼\قuUftulZjk,]ʹ1-YD!O"fǑT+h>cJ#mrEKry+|>1r笭anu}4aa\.+$M#m΍2E ;=wo uzPôDهZKg?bW77#~_W]V[> nKJt'%̾LVIlgŦ~A!*̆-fԛA 1J2r+;pZyNZ`L1xl+̧L-_> )ݽJa:ޔ I|}bQ/{4sTQ1/[zzŠ5*++^wtcڱ^5꧶n ؕ/dݶ+:6z=hk:VnHVn+>Z^m8_٧uUr\lь;gՐc@1+~;H"dmxftIЎ 
UîGMćΨM짼m%+e^&m忲=N}:y]?vJuw2z_z_u{h|ڿ@<P̋Z'SJ wOaN;ZDLytL,X(+x@Ctרd[+67,ߨCƄ{9/:F3cښ+M72 PTB q&D )'WiJB Oed\ZJ8 ଵ!sDC_b"vڪsi۶B"G9o1cg ?2Vv:=yy]>Z:^'0v-NEZ·TvuzN^Ts: nnGT:d4ݦ06.ȳ*<|$0*9g"^μQggOC)+a-AX *i BUIɧUYR(A` с xf1:Ad) XcVodVkc0:][qly$߬V0,ft-W}IE/N-!N^-53_ps/vWB.tXTZ7IgPv}537Z_|wIl毝5z滒 RI%萾ғk9w1&~H_l<NZ&frx48a5\ҷO;>owr8I&΍D+hY1kH9q> 2'm%w;uT >DV ,Hƣ aN3:YxHLM(n^ Sn}87NdBcJ2ZhO$Nȵ9zoH^YQ۷ގ7̠ؤ vxU5c7wh03*)a몣.wd)mB$|7K[>P WI_ q5JbAY9OZWQm$\6~E.;ѿyEW?Yp*VOB4#y|[q^ ￰͐޾<3_on].x RNޑj <+-saU.#RZ{b*'A%95S52颫t 8Fɕ0:z>XD"!os)*x!T 0F+HYYW3yMzbمDZeOK|pXdQ+| G9'2~*L-cwg*E#:T88J|$RؐkSFi7I >WomkĶ0=Kd)34LK4(e2 bJ*LH$Q4Tll"P)c$W8q6;T( ()x4blY?{G0ѯLr8ϔt}K+8vDC,v[ʼno4 ,;6=[oH@(XCPd@3T*dA;b5d! >x!B%43KeIGW22$ϕfNrД[£SP"EADD X32UDZ_q} +;nڜeԋ@u%9B&iP4NTCYD-MKjo ?|jiFxLz!hDdQ?8K, (Q% {dsTVv)"V8|MQJ QkGjDA HHPTV.„ةȲ}]T٫yuxg}_㯙){a{9oBr8O ]*(~y_<sovm?;V? bpF6v/q^{ϝ%C2yQ@vLFvvGή@vfxzg*3$Z\qzBD鸒\LLRqdQ&Ry6!zt_cJ~m?LO8-dVh܁Z jVW?4}G?q#dӨ*٨ü2>|aܦNVvnǛnMk3ETsoi:I?zMݫIB*Dp2G/l ɻHz"peCNIC Xk$a]Z|kJk8 F($eB`d9DN:YHPOE}qG%ƾ|siς]ŷgKsϋo޺2}.\YD$EP َ{s4 IaܙD"c*H^[Dx'흔U0F=WwCazDW2$HQYfR6R-ܘO㡟(q)ܛKf?v||KEEJI899\/Q4NScp :g(瑲DB0*QHk1_Mkge1Ӥ\,T3I>X_t`kU}r LǠ D!3E'J"T0ј+%C [V9Qd=*Q(Q$ -GPnZDpJK7Z-EfWf^ry]xx,dyU"uN4iȐâlgqGnu7W)s )k7N??6}mL+oqqyqvhGt$uԒ@a;Qf(ơfZs$Z^{-THƱ`c)q")R*MJkbl֌QAta1Q) wRkuvpFQ~}Z67OESC?izT׻}uw5Oie:JVE  RbD \QKhH!60.6AhI34@IȍL>&.SÜqy,Zw쪵YaZ"* wEWmQ Hdib.B hnT֖Q=s6KVehJT (8P$,u,H.8yQRQ\qVQs2ְ}JZ+iSNZ (BuojU9Ȥ`EJt`%ɩy V#z>b YIjG5%vѹ{Yg%Ð.tW.3/ 3V.9ӂ Έ6O'sj:LǞM[iKyS:]Boh*XTV!a>XĠfUTTZT˃q.MFL7D3EQe8+ ?^15gٻHrWy4 #[K ڂU[>dYSQ>+mT,wЇ,=~/aSɁ5r`%PJ|X92P`qY,bB6EK 9R!Ok#5pЎdFE{s-$Q?nn:0fT@Xf.؃VÅw48_c 93}aB\.SIg7xл1zNziv)>4__|Y˯Xj`s䩧 #`Vtو7G5~N#t8|gMՋm HO˟wc=}͗Y]3"9:,n6-G\.g'9s(W o ʧ=b,?4LkDzW4SmP2拴}rbg '} 90L|3jcrRࣦ{ ;O"yNY 2LE ?y ĽaD sri-|xt)_/~_@i8qCZy.L⽥bv k&:&?6o9mkrC¸@Ёb+j7NG6\N6cg+Q:GVZy7Q=5wAWZ} ȊSDE('h9t[cz݋!v%9f 1&KL֢đjf(fâ}j( -sb\H9e:"re8nVTLPms#-wH а բ: No+UUP\TBT'Hho ʧl~D1X@yk` )Rdc_ظAT)^ٱ.ȈAJD\*oW0%Pl2)8I*޶r(Φ|EBaByrWK%2P잦v; '#N'gJ\c Kh1mqŸ * ccLC* $TŶ lH!7b8Ykf_M,IJ#5X=9uuV[ؐ 
W/z᭑H?yy̟-iO>_p?N1z!@A!U;<$a&dy\e(:%<_] VQ:J`g(dMVqT <[m\rJڥ%]ׁӃPkS5kgrn.S3#[H1:Uʆ*GhUT5 g;n CG[9䟎)f冸b,Y"K#W{}|C?1|"m`e>%,_.,{k&qiBAV:j#)WTlɐ+\XcJThO_?J@qC&erFk³R~[&y仏W?7ڃ6{jKC9{^gzۯ3yioԔ}u!l8SOљ#)]CTОP :R.=Rm0K <20|RV e՗B8ׅ~־iApbeց b̒h+û: dMݘc*!!ܐ 1rV18|MpR7{/i, %s,J$ġ͕ jLι{<1;i!O9H sqP6 ZxmZ~A$ne;o[*[�sv+e=/.>}-wmhOѝI>Τ9}ݲ܏㙹7kNوAܓí sdRH@TrL)h֦RVmkѱl7gkq@-:C,|ө~ _z!.h%u*g'>+ŒvFI)fh &M&Ds߀T+r>i/!KpLV~vT83CܠNI>k@t~=Sɽnv5κ}R79y!}ҪՃ>aYshtE&ofᵄFrpdJ-Ep5]{L+S ;y1ItJ\IGU :΢RSRm'*j#3erXjHD.T+DdQk g-BBF]W]x7l7-gz~p)x{wi^$S CLN.7\cTH2F*iT$(zb[UV1$RBFuؖlbḾjjz$M(2`j 0_3W:knٮeY .Ekwj֌Z{`{<ƜH 8m$" 4$#C>HȧĻ1Q}l>Ԉc1H+$bVŲSRI={)g ![|3"mDڈ}"Pm}Zo35>şn;[ݾgeܬtJ ~EPwvm3.R22(H|h6cc1Y%%~.zVo@|R!諳1H| fT"ƂKȵXC$v6yQܗS6̅]5gkUAU(mcP9_*SEJmTktx5B}mwGٚ5' *pY>Ys !@IӁo \<|ymrV8Ҍ> ]_^[rٗ+0U`1P$ɶB(&'o`H`}l`|v"N\RL4kNS@Sa֊;Cb@zɒ3 KT)d 2HAM;e0UdHٻr#+|s%@bSz{"'2 Vb"[-}$n-,RE&TSn$sk/s)J}wj:s!׻| Ϡ_S₺w-Uf/xW._׼;=tHu:gR .΋R} Nvgπd><C@lL11~~?i%jn>QjI&#ߪ//x}k T|էI ط Dg?{z4;H?#cd7|M7^×F25W1 o{D=XzYGyusuqҷrF.{Ҷ*arfsX(6qpv;9xR䏕N 玨Vgxa8/|~?l? We*x쟜Qݎn&'WuުkP+h契WpuwIO$?/\HN~_~%OܧPٖ&M,ͫxjzVf{6y{9o 1C9?Ǿe{?d>+[` i$I)zVV,k1R'lR((#jJB7M7݆wwƪrVX Fhq9 u.5DVo8vSPg%ژ2FnSo99zp`5$4W/e:!\2ba-*q*̹8vb˜m̹?s9roG_+_,Q$atHXN~J5F{X\ڝT))hw!@hEyBށ;Y2BU&st:%/g1nhiNΗc1ь;RK y_5(zFet1{ό,o&f\O *Yl=.E3:H%DIYWG)FƖPp;in hT?#%Lꌿ`':7~\ퟲ{`o((39&tv&j`|+?j**ڤteΝӁ~J9l!7T4#%Bw8sZ29ώ?OIGмEI Z(`Fb Xwv5BwArXPGa=F,v̮z o^uG#W6 שg\m.^9A!H|GJwTo)MJs8^S+7 dG%gGݛW{ʧ`eJ͞xl&c~D'sZ#=|N{g^F2ϊnz >tOso-1^7ȭm%f+QHc>\&p/pw Ģ2(aF/L (0H H_(=F};[Y#-; ;US!CEtY)i#%!JˎR{ T6;n# 錇> 6IVe\謏PFQ.vT"WYҧ~/ ОgT W`$ ^ΘOq a|- S0 ūW Y7@757Ykˑ}#Cx;Kb7oӺ'R\̻X( xzh&66~fl1de+̙Xkr AEB2 B6r~J(URCvu劵AZRI*{edery"7V%$|jfpCIl(rR7Dd-Xq Umv3EqAx=LU!%V %FnQJsR̭6s;jFWa6.\$9szoh,%3 dB+ _,Ą.c ]9-EKBf<#Җ~<d6J$lF [kS:03O3iFKLĐ)6jN߿?uY%|@T&ъ@H)( %Š uj! co͸Ϝv O . 
Ch$274,CuYqTqtpw,L|ĖL|;57bqX61\+ׅy#sJry:gׄВC`YVoGl{%%sxk c|Lb^s &$Fc7*ϣ7 BA($S3MzcKL G}5[XT̯_}8T9`S@$Zn܁rY_ZOB-kt;Q7b+DΓE.z;#/d;f48PJʑ_*aoZg)0-%eI0 DƇ,- Kg 1oe.RoC`i uj+BcJ0I+ FDTSa\nW˂=WMXڗ&ޥzJr|Or4q9PE^"x|s1%-:I _l˄UכOLMF}$l?_>/*.<_fU+qGOWWWJ-uyw83r_r1P8/ڰxߔa(ɜHOi)T8zvX28#.`(T`{"!/}W̰_on|2K#;9wLhtC@fBћUsϥ?}w˳>rgf~5<&!Ѵ~[E_pMz|w/g|VTL͢$O@MO%$?N):1Z6`7Gi7B&6z*OdVX )^UliD)V6l>&dyl  ׻e8,oAD{yV{iqTcX-Q-^_`͠&_Vg4/שrM-ۊ|2 pȳ,t]}8a FFLnL+TzݫŶSMB-5luvk\) 1rBJgGFu['TM#?WTƙ>;OLY ,B},?^#Q,;7L7FP˫@}='-}AҴbV.cCg&mg}5c:W7uQDZBଋ;mf]hN+gn7mʉa(p 6d+$ڬEt3E`C;ʢd"U\@R:f{[x,v 7LƎ%3[hz TCtkQ<!ѥ]q4.ل"9~XMWD :TDx2T˻?<(ݧvI_ޘDu iYsRF!9-(-dK:EřD-e£[#}x|s=K lyTK)Fy2vu! a4zi*ȟ}oO Ue5ΟWW 8Z*cR@i H70:t-bxvSZ8c(:BvcaZ9|stnx` $W!5+HyAǠB6.Sr:2]D}޼= (0WcEG׆|/{OfFGqpY^}AD"t7W E^ECWTVh:]tN1|bF}Ky./x8ݐy 1!F1FHφ#4Uv|bA[V Y`"<ȝJ|V{|DC|ǭԠ,\ˮJ _&]WTEvE)͞W)*] BֺwZ)c* $UȊRTP,`E&TW U轍T,A~/dESI$sCɞΧa\7^~k&~o.1)}.)譣 8 CNxS yOs B >H.G͋Gk&If]+D(:9^AM.i1I{qo ذ=)+vοP! 1݃%Orc~r(ʁNWH0z;OK=D2rNwEEp pK|WɥY9=HE$^$0(E Izi Т Z ¢1K4Q*GM>'QC}O)P ߩDo:bQ!XR,Х!flΥy Q.MάځG!g Jy~ρ!:B}lb69$6Ny\*Nce9ﴎn}(Y#@RBTP[HPx@ 9$BcB&iYcJMl('\prN)HdHl0P-ؠLcL-k@",yo]͙Jw[t=pΈ8{n"~u oͬp1Y9 V9nm&?Kx 0u7oݐUs;Ǖq.wnz|߸祖rMw7pWݤ󗓲Yg{/:Ts1Eo:8 )|:~OGj6͝D"}BɗNa$_VqIJ%K+8*%|/v*8:`Üwۯ bg+tP:댌ia)'qwo1ŝ$SLq'YSDsJRdc@OK>{_<*.$r xEI:zd!Id Jcxpƃ7JdtiSl&654g1=LVoZKy[~{ojhkl7]W9䵽j,W}g+f b@5ixѧj{W/߻;Ph;EuIixĸIyM|k{n:u Q*RpD @dl(Q0jϐ9R"eˀ&'6h2d|=_q|L!@BRVe^Ƥ !^#flFu ="8)_. 
KQK"vlj祏xV\dK.)T}ks׋njxѢ4:EX} s(b%U&0"eҥ B43/VieHȚE&Ԅ(+TEŲ$QCrΘY3q]lQ oJݯ`KۘZ 6+gC- )dSJ^>$64@KV_Xъ>7!˼#J,QhCG)SzԊ7cR=Io]*Uxҁh%YOLŀ4^$@jVh``T9-d8tbˑ%R|9ތ٦dꌠ 1ІPn 3/i(-UuX,d-׆roC0 +h8'gT%ٔv.>.H`Kd*.ﲴC7_2O=X|Zk~(pD42.AyT2*U|HwUO4「`"NPt,C:_@YLv"y9*54/8s=gcB'߅h3(3~Hu@OhP)-FΎNYFO޻4Wt4.FS"~|a+ǻn PdBEWm2R`vHJP#iTJ9 4 Q6D'4K(iHNv 4R[V[\.it*Ř@UcQ']7 ǹtjYͯ|u570^SL] H0"?4{7Z7|]"NѬBE*@*-i$78UIȓY(5əs]mY6΅)ˆ]qAЁ!{#X#?\,jg/yTrsZ{{sxpG%Ism<.o)H%c:J35QxŨ-t޼pK^A5bm@uh|zC\iysVgT}skK͘9) ̂.䦓xC*&|jRDä,gQc2ֳ"Z흀 ~粽x~_kazzWvֈ|,  60 SŽ9Y8lXo=6M$?ܳZ@- 4[ncn;cը'l<ㆉfx:asgNY`3%IE_ӕԿlxv9@P EݎᄷP@li7S-!I *2L[-e=MÒvBP]ϑ-1'OI]1سbcN@/R6:r+B>#cuw2h qA:w=.j?{)zyTB;URWB\S{PVDSeDT.j[9"*˽_{O^I^QOV4 ʠ#Hf$[j1rsQhb"A")j!-"uւ֔z]ІH \TO$>i@"n[1rv?<6j-Zz偯ـՆY6u(nR1T ~`#ӭƱCGAGзWC\bd9@BTӦ"{^Gw aZUEVq 2Jk>ƭ@K@>=؄0zsz)r'pQ[i)CA B߮cH;Ahܼ'=\aoy|@b9jǶW?y7~> [M[m};3﯉L>&Ti8O S]/Dw"*R*p=B̩ըy#.`+c3X-Qr%V()5rn CAsC`VB-l7 m+~s}}aA ztT-:;X3=gj~TWq\} hͅIW!U9Φ]ˇixw@oWvvvs0vqcFs*G+ff +f Sɩ @'hqpfME?kj/N0Vǽ7MA>Ar_[ekH^WM!G.4zS/7 Q` ʴMG{?({oő̡:8:aAawJIj<^cT7q`ٿZh9rUQ FbZrA2G\<=9sP3k[bY\fSK|F4#oG{?|-йE\FᔳhBQÙͣh'|'q{o陦{yGmBu\zE`#1r7[ڂyZۅ Qox;~}D/  iICL\m: uEfBOyg ]?ٿ7|:nS/KzA{=u7vmn@+~0_\=qV'پ˻O~6Cf_ݎn&~[4TG@nxT#=߿F! 
*?/???/gT?ϟ~ M_𽌣m$L|~}Ԧ8lj|%|2oGn|^y#v|w%᡹[LwfF%QA$ IR!z1$" hNՖ#VP hM:M:'鵣 3R5#'^{TZ#|~'IV4"SF!)#TFDAr4T2»M":'UsOYstqD}7^3}Dȿkzl9r -n [( P`K\iuOW )&!RWi"uT$%LOULw+yhH Z~'BYϬe$jcҢ>Nz긣} ses_ovԉzٻg sԎ׮Y/Eٞ+hdް!qobΦ iv;H`,PB)+Op)z{#eyOZ,y,Zl/ӻbNs&[*߄a}ټydz(Ȅ ԋJs+(2J!8+ts'{xdCq#;)@#e26*:BJn5z3V\ #H(]!(փ1$˵MS$9t2&?LIE ƈَQCWKżC]#޶u:`Y# &0* Xɕ\*dj:H%U,x @2 G|s?˗*&aMg| 8$ Qr"EeIwH#@tN֣,uDwW̻_y<X]F>g&ǫAb1J&ijb Sp[ od-M.6YCՏ^ nz5:LuT(aQ$XJq(a4+ Qytmd 2dFe&H;Ƀb& 6qDҥK]caXp.OEk}6+Y;3pjf؄$B1)G'$2VfX@g$!9 EaRY4zȈ )G/ڣp x"(r\+XЯ$ꨬ-]6N,8+SшӈF< HB9!qB-rEHJ2UhO4FЯZMRE5є8QpMYg\p^( - ,>ύe kbpey0 ):qɾz֋N/_yb^c]oHWŅv0`p3.QH^IN]~$%Ki+eQ6&d*VW-SuN=?W[dy F\2FN(9d#gr ;lL}FF*9%C6+FJjrk:v=%Kش +M5\Uz*؂K/ /?~xȘzhD{5DRs#hswN,FNxM~]ဏP=GmU5Mm/(X/R %\̮;߼wHqnmqEnTU'yS?&})7 Jh}[ۭ:ﵲLaemZd21`$VKQc`pr50,8(eTpVCL~PX )#z2*k+3wutPWP]^Ck63ST(__Qa3 f;YI&Ĩ ng$00(͸ m3 `Z2 B3R|JtzZ>;C]?P\n7ok.›ὧt^:3dJh n/S`^铨b/;Ų}d]LJ.LTtz+U2YCb/GkHP޷B.-MZ3OOi:#&"&EOUQiВ!KcHDXf'RA&m$uPFk!(:r6ye`=o irJS#4t9[l@Gڧ^vxh_F Z?ۣ ^=5Tl=F:]pުP0E&e9ParnZD2fvb$DT3˓A. L%ȩzo"'if&K /A੉P7NI\  Ԉ\HDn Z[U,!$R8qw,gեد#VE8bde<P>A41QȽRDKRQ[#$ ŅCHe-zJ6MDF^RPVuҐ:Za1O:.~wy?{/%%ڰb*b4T@ M8XdhW*:(NG:Lw/t;XG[11sI`ZYs 8odJZI5 ZĽ5@ - $Aں m73(s?\ԎE&|[GҊ=:;F"Q)x "!oKZGn+IDS@Wqs.AE`j>jKS$v]Q3rl-$?@pRP X xm-P׳熈o_zٲ4|h6),>ceg+5y."QiIz\9V5! c9GùĐȥ2yA!*ZGp Xq6y>LGfLq(T)X:#i ~g9MΝ|jɆuvb,YZC޳Z-צܻ\/)̻H,QhH)8'Q4 n=x$=|J>JQS f2Z|(n&oEqߵ'.,ƴ=CIn.vNigZb׆Vo_b7%Tڲ-Lm٢5sНB 2xLyt(#nR𠢷.aMrs:+mDž1L}\VPQ_UԌRJE@ͨW)̾(%]RAua\WWY!}s0_h^>ŇXL|<л>gMY_dȷ4nltX{+Qp1k{mz*.l\> ̚.f^ "qbINOfڨ5g2WtgJ:+RC. ( A@ ^RA /saWv߭-m0SgsL>9S}60?rYA2A0<` m# ,t{2QajamWf'ҧ.5l āyȎm:.t3iAI !m:8>|-?0|Ϳ-%1 z?21s&1럸1'k;̟ýY j׏&-ԌzW=0 osǜ2̫[d8vEGa58BSFX*5p ިDQIe.|hb4Zn:LBBoOmT:kկg8vQ~¦^)oST# B<~(Txt6R^-R\; B:cJ!,=Wz%04lȽ ~蟽Do}%-G[HT$J1K,h`SFEL1V"rho9 \xŖ6Cr- gNꑴe6ꓵPO&~:Ce`tt,t|z'}m2DL DoY+3V go/ͫq+=/ض98~3z&}W Y2>T}Pkqzp)<#yZEPu̧cX©*:R*kYHAH[^ŏv 20""[z-B,|kjԼLX:}SY\VJg)Y?bSTģP%F'jڃ6EpiԚ:\h8AxTs.A!8VSQp_ȔBg9ytorg65~]-˲qGwwumjp)P,ASݯ|Q }ēF㽇H)'>!sIk0#亇fVKkL#JAht5lp"+d@ѧTDCAe-;/{AcPȸW0:+>GVvla݆qټTt4H -c1J ($jxYҊ:/? 
o'a2MTFJd%CĒdD j|8GVA8GՑA]K慓xCD"mYt(X9$q5Sɇ3)w]?!*ȼ:vrY?xSŽFK%L/VW/ZC*(*\_~/>IĕEʣ& )3T C 3: ĉxWs$sk95aAaJ+>| 9mٜ[k'R@x*/u rŤXoqF}} ,2we" }tᶸvFTv^ˋŧs%:wٻ6r$WzpVoAoM+ba#-aTmƘfo׳~\uT=xqz9&0M %\.|e?iY%oNp4r[0w9ƛzlcO\jS7tc7ˍ4 ,J?3.>=YٿO NW:^7U2N@ܱhs'7J6.9Curl}PDUax՟$.in[ܘ 5~F гdcã6]A2#S 9V=y &F;#顽 s킪=GOZ#|^+0AC.' EQ*#X w:Eu*_13_D}s3 &85_ {ԤyoO2ǴWuKD`7&b_㯿L1 "qHcpI".X˙9`B,];zj-QC<q`x+tsN+e\yӘϳ.$˗:݄~*1DC%F(&h$Ɇ=XRP&V\x,zF\FS96np5 2؝q=c GoFoprQMH< P}*a vX.2^v;r[#e26*:BJn5x3VXFR$OVDŽ 1a=36p9IlJ$S F&1LDSET91"hhtCC#h1lI>A9uׁzY!ؙ u 5ob>E eVKy23JfuVy3Tt՟`fu#E|q/oȆZ8*\gx\HQ'JC%ʁRwH#cKDd f$&#غN2AIjmR$vㄹsp M8a)5rT'];u|KFEgeTjcP,D!3&ɅWI4ss%62Gڗȱ24_#E2>,H)DK=;X72D:RXK{q\h˱S7T?vv_|Y̯_@v0}'o\ckt QG`(!`A K5Nc%&pEXT^:] Wā b2jI;Ƀ&am&r# {K"gvahǢIǾZڴ&`,H$5 ğJbXFNeƮI.za Ȑp=,.5!b/i1gk+:*k bևQ?͊X4b14bOB#`8& $4w8%K(O(O$ZMRE5c8Qp Ϋ|YϸQFIl+fa>J,:ev'hYLJՋ^d^V(}!Șzj!p4jJ2(JTXAEx%*: CţKIǾPևb?}x"[d=U6??P:DnEo?R6(Z!e= B01⃡,  qrQǎ.iOy'ҌdH"1%Mw+ȭU-tD؞6i$zWmj&˓kr%aV8h&AG>`xjr rq2|+b*SѦưYW{2}}k\]*l`w{?T?oO{|vg4/~!sL:PB +M8??$V"WH޲T\!lq{e0I:_eѠSf༚tÄf?ab`}?G7j@===z&Nrm֕ZjΖV|ZV<@br  2z* 0S+̱L| baւ_W`I!A72r8j;A_8{!γN/Qŋ P۾9pIJlDPo+ 7bbK\2{ ޠCzS Ԇ%P$ 1fn tCZji TsԆ 匜nV/8bpe+fE>qPHA5J&h=d7H7J U5^7:n| Vx.ds+0bX#aCB`MSNJًe >C}Cgh'{<,iP4Ӡ,΁) dkMr+І>6rKιh:Ƅs NC6ڊU XlSwtL+LL0+N>׫6״P6ZsU!u"o!7>^dɁBJttvp|ojy"3p8;r$[}<unlϳ׏E@'ȴ>Uw/koXż0ԃH%:=He8`\B4QH)XYuv/aM\{U+dbi jnHbZ!xۮ9~p͗ENeRS:+}2@.(S9 }ʆ;: v-zJ!@0{R*թ+:^*S:ubA}L?uU'ZLN]=IuU* Y&oWBf\[z/ˉ;"@qX<n9ə}F B6=ϰ<9Jy?W-wiHH1a%IH 3\ThYbOOaPmXL-o~7 _[端?8[by`gQwGK85noȰB1 9oIٷY\6o8J:x>*^GJq&vC7.Ѫ ~_ςpꍆ:c3Lj=f(}aذ+;?&,w(`7}Xۭ{Lčn:{,,:_IŘ8!t;fr9?S+cҐTrzB*lN<%qTUTJѩ'W9.ܿ_x6veib`=;Q'JC)({RwH#cITR FxN-~6eylJ,7'%=HL꣏%*a\5OUj] w5B]!ᮐpWH+$ wB]!ᮐpWHs%>+QK|6x Hָcq$]箈sWĹ+q8wE"Ge+h&*,W~@}j:q$S@h r=|+;gx?`()ict s"TQ 1L`P"c~\"EX9Vs}6'ANNj73Jƈr;px9|y[Q#g7:${ S ~Lh#S-7>][o;+z,mxgyfY` a7x)ښȲ%q[l],)jI[lwT]*Zdv$ObCH\~ٞIm/ߨ>MoIrYfGR(bC#&5LZFΎ6xFNY]g`Y)?>v B[rs1Un|9PoY . 
;h(( Z0,!%EhS:0(sJ?@`I;hBQHoѩw9 l#=ù"UIrdm籴Fv ^^2 _{U8TpO1)yWtNΆ `2zd\&ǵ);V8aR0))NEoS|^pv>q:|mq9N1l}wjx3pn*is,H+F 90ot޽*cT\_\"k1%s;LaH7*q EVBd;Yysw򖿎 ^_ۭ[-Z4b{fJjy_r涟Yj'zI gIh\Y:xfx0 Dˆw^mbJS&m eȖm:tbãҿtiSr+!NӭissjuaIE%FcJ:LTw5nXqI- Mwaі{.9ܓ`ABtI~r;+Ѱ@Q{-f4&&;фM쨾E1a9NXy]` m}틾+7qr 05v'-NZ9.@S@ʨ /0\zs=Sp-1kPҏVZIDSY̓퍦br.wֳS e46*fcAD%3sdui:7^vD:Tɕ_ Ɨ>, :_oa/.'k+.zowzO|q8di&+gjLWMiӍ,2GڄZ? U,Vz8x-qUY9SfS +wZ6z#7yj>oJ SوH3>2T߿)&Kٻ2AF)N`oIRzւ,sr ȒPz0Q$OhM9MwNsGfڥ)FNNN\Fki$>OF+i)tӜnSM*OYoq"~nL=ekgHa|%Em.%r -[L4;/Â`ve饜zB>f%B,Ӷ 40nH*;;n<2^iǴ6d}́PЁls9=} h bdkX|Vv4|ܥFwnI4\8 ÉdtXiP{!28O@b0CR脷ᝢWRr4uiK(v4|7s8F8к6IFUYicE%Tr[ٔDJ. 0^~3;P\HNZԎEK"uv Atλ]Z}@oJ*1w~H*ɰC`;ӟ01f. hp'!ЊF]UZK*‚9&&!HTI& Xg֧bcb1SLԴaΤ?Hm1ƈ";A}cg jG"_?g?6׌UQ?b dC̿>]wLIPOX.;"Tn6O: LN cJ[)uZ?7i|PIc:8TE[*gt@ukdJbVQD߆N5|C6b,%嬄UfD"5M M@`9l8Gf 3ǞX}=jb`*dzorj^2x4UYO* ͨѨ 5K~?IMX4NBheVd"etͬQ9ͺTR2sIf'ou\ ^n]mv/wvvumִwxeJ,*\%tA >$@d+L ˞!Xghd LMlz5,j詅 {3J­DUl.5 R#ʰX82N֌'Xbq~Dn?udcgzgUs|5՗;dVPM*Wj eFtɆ|,!#J7`-T6cP(f4$1(ٍ-;۲5CaRɬ~ Y!8eEKcTSVduEy)J+48x&\&""i΀{R]ʴ]3@:ߗ@<~*e嵾D%]rʍwH|-]L\P\^":Io⇃#0}VUgi\sZf%E78.%..qq/7j:",j_ޛfӠ6%E }b! 
ySC-q9l(85čga!kuu[ZϑG5~ :ޥQH[_)qWs[>z3K6 Q'WbN.;Lw=nw muUk}:u޶럼CJt{q^tz@i;ԖfbC|CVApsu16T w@ǒv_>{Gw&Žino;do}\[+a\td^esuܖYJP(ls5\j|S,,i*}{IwΤj$N \sqly Ph溅?I=Kj H5AҽSVBT"Ak VR Gq.e!9SM%9{N"i-aub| /d^R\ Hmt&雸<Ovܗm-<*P~*+ܡ] }0( LSlL|S>܂^u@4l:tϗnW*5x|3uriJ$2or'4@}N0ovo/ƾ3sM.'!J4,c&8jؚ}I!'sD#ڨ'hilHT9HH>r`sUXvY!^?ea9BLcj!u~9oubPxFۯ˧;O,6P*)Pju֫RFcZ9][$w1{>BttX6M3)0sO@gldWJ6 |t:z3m:RRF0UmrqbϒALIRJ3OJoxc*U`:l8GԕEW;Wx%PAWZb+xj帣7ciɉ-gɴܱz&b='T+(1kSS*ˉgk} RGR"4FFΑ S9lM!?;bacm+8B7]-+/`iBw!ԭVbٛPItn}jFcn+FRX%5/dh^<ٯֆ1^y>+9Y$K(g$dž"NS+%6g\Uh"(%,>6#d+{߾[2;з m=Ar+-=1LL硍7Gߨo@|0䜔)d*EBa9\tshM) Ir fM*P-`e o%ۚ3Ʈ.l%g8$盫MQs14FcMl+(44+&Æs$I_g,O:Tا3ϛAx뛝65CȧgEͳUؿ._9ڐR +0X+lmB[Re)-p$Sglb?1&Hպ58Ŗ+D^CMRjŌc6$z~ܥ.xעOeiNq'W,%fswƿXLNZfr֚)bcΕ&LLr7S[Xmj|CfO/g|\Ow+m'y4=k;~ˮT/Im!*-O⠏(C~'⭔ru7X9Ž%rSʩA'C"fW&UR&lR &$Gd2Ž$oܜgnAO  y- ÞIν}JM>3I$KM߰&!pW\>_>bAs=qI߷[RP!D49nmbtUMC0T_t>喭|˃V^TB o(XC ن2!4L(v, ϑT͹󼣊JvnL;{ޟlVˈ2+I;.zRPޓGmJQud,-,w Nyl쳅&R^ ڊ(hOl!J\8͈ҫ^ɞ1B"WvUY U)]*A*880 Y|]´+Eȫ{[mq2/sSw穟9`H8O}GKdr%5.r4Q$~6֜`=,gL๺=% =Ky\qx& & )o!4gJ Z/y[yDSva0Tʑ{}HUL)13Pr ŁDU6F cJ28p_;b?F @hfwPFFO z wƣ(k8h"(+K\: ao2AKjP42mұv5xtW_W/n]'{o˺x0+5Ƴgf|HgD AýkՁ>o|w56 yHko%vʞ!b!W*ؘmYi|Iՠ.l=Wu8G~Ӈ'THP7ϵeRֻАYP!Os^R;LΘ aqW`0iEuO>QR{Ej={҃{|TVJIOƌ`e9O8 I6r ,'QuwNy`ࣱ&' Q.cIW$[`ue,S;3<*o|[?pqOc1~ȍib;[˱qqёrROН/s5/5 1W&DH,,@XYPc'+a/IOZp-qOss X:҅|xO6M;PkC\wGOip7Q ݧA)9M |JW/D1;G'!N'M]z Onc<]ֆݒ 1Km5Ide5RkMN93u1nWLlNSL Ԛ,J@zcxGĤȗFPP-l{i-5yO ahi'?;Eʇob ]T^ļnot7.rs! e@Fƥ# ~YG}#{"qȜ#IB: );jfy7xa6ҺVI96B׷ X@SsM3"9II,3@z<.vAឧqvؓuaTt<`&3K 1FxdndbMKg<|ڟkif"Am aJ,2l9Ҍ j|L$S!9ǭQ~"5;Gab%9a3K wBO6!:h$ɽJ;?^ {?+;xޅeWʅ_gghT<?]~jrqK}Ege4s68H~N{yxu3H.5V03.dapǑ,['ܐ#Pљ$ʘ8b'ZIIZM&LD'idkŒ.0K*:0mҽpK=a8mX,(>ЗPs6r5輻OrggӋ𹊜H^꜄EM.͵rٵ hHoeTa֧ު앟L?vU}FpP}hxil}+Di}opGi#0%#7xKA66j,i4`ye-p@琳 pNnjuc_zY) #.CI\AS~3DT|}3? 
'$+HKRߕS[.!?aXb8 ]^_vˇAEߣꁮ*f]g/MI~|ߕo޽H7eB9uNwM/;4mj[4- lѴMzi.ohow۵pk 1C9,yd>ϻ2X=wj$BŃ!^*^JzVV,k!t)Jr8F%&&)z'wR68<՟rC1hrc)Q*.PhUR"i5UAȡNR5f{[=o"߾S>ig'ͻr]_ŝw%dA0r25fὦB+dnZ#6qH ]1 Q`灄蓪D>{??5\@Cf=ߋ^bIb1ָļ˧U.Tե[g!ғJ^_7:Y2Y+ZTUN&+'l7V.eSR9]")U{T^XWYFJFʐ I -ҒxIcWDs֓R6 1K֘)fa۔.} {hu2GwV:^i/Tê18[*zT{Ь~d_GEԐBaR2&D- r}01JQD^ pIzA##(a"\ 6$FҿD OD d  }Fجa. ER䤱LZ! +"96{K(fTohLʐyс&qML,()E* JȩNڹ18a/bl #6f=# F\<1d,ڗDP,gd5!dQ=f%) 'YQu3>` Ʉ/QJÆ18[\h=⤚O6u6f%"4̋Ş2)DJeW82*S:DAh\'P=/>/sn:Cly~N_8O? m3gUe]UIr @4d!>nAQ#WY$Qd  px-YkWkT*;# \IHh9öK‚R5JB̈́dCtELg*"Z~F㫤+J݊i~ZV^"tFA0܏ӝLEC>(+ Fv? #)eΟ?_!Zf+L*p ]NWҬ.tu2[9=w`t\%KW58tJ +]zI\a*p ] NWb=]B"EgU*hm;]J{zt%7Vu3tU ]ꪠ++K pپ)*p5D *he JP=]FRZzRJWd 6pmo+OF@c>ōyyuehYpX;6/E^ä֢q%IءGߗ?09AI\%l* ,A4jT=FyiNR4ֈ*%*UF\)iOBwiS˕p;sR l-(e+J2^Wv mbAzz=tECtU["ˮUA+[d[P5^O~pYO t? Zl+(U5Fi!"wg3+tUЂj;]DWvˡ' R =fNp5;wZ~P+]ٞz NU+BY骠WHWfNQU *h ʶUt,t%i2v0U++tU"o;]t Pk:DWtH]BWm+Bi5ҕ|]*n15` ř0'\*R}KJe˫/huieHOB'],t ەD N\CRֶ3Rgj_a!x$$[\ݺRxiHrIII+_1RDR5Z^Y;@< tL .NQ;(1*QW\Ǣ ]]w"Q]Y5?"dhU!;HCWWDSWP]k ]ѨB=tUuQ|yޏr8&tUǣ "j8BJz\])+v¬f>=G<\Q~gRy`ׇ*.Sϭ5N"G V*ԺWWD%؍1>J;T]ѨBOy&*TN]Gu%AqvL`!XQW\E]jt3t3.Օ+I2LЪ0htY8^؛MRg/%YǧZٙzŹ[fܚ єWH uUE/hFϛ&K?>ϟ?N=g~R~8œF+J h\eAhnRYw5IOJj/cbӅH .UdsϣT3/>ec2kTǴH]ẗw-l,Ft/0] v~{CEC\(j0qXHȖWxk>؁ӵ{8e3.m旺.\Ż^jtvz$1Z.r4LIhdfA\ #!i2aAg+E%N?IG V OGV'",}r6gB 1JBT*f!$Y&&sJ3,v(2̔/Gm`NJ,շŗ4M)G߰9zӫᐰ݃>ɧ̈ 9ky^,t09"wlEŖkM]F^}_cbVyS F?IubD|Jƃs37MbO?˯8|XM+xo55`( t =Bـ ԺU !Ts{:=x! y9SJ$T92߲Fg9F+!C?yQc:5FAY6p1}՝< /"wɂ RRjWq*N/''g*yJ[GW?q6)Y\~]Ni?}̝-?t#Y>z5VaW4>uP=LI}r;^Ins9`I'S:Wr#EZ2[D?] k߈YoEl&30y;k>.kػ'^ZFmxi0Zv wS5hARrȧB&Tr+.)KZhRp䓥6x&9rv>pz}W  v.̞\⭻0aSI%hM-80>#Gnxe+ i0:n7?_{A[<+! 
s2Y^| ƙ BdQ,wFsmLd^c6Jo nsj,:/0!~mү7ʢVV1gbi J ߣI.!7Z[-@fQf4h A/;*eP S E1p'E$GrjA'&$ &8 uYaA%wAElU1p41%e1DsNp%k˕JKa$w^qEˊ5rX )t`oNWFozBH>^<-ynum)Ҿ"z[5 lzM57x2+Yd`e8YJ8F%uYV.S<(xz:ճVtBBTfQ1~6& zAtMA UyZ<8(|7-R{G/cbLXkH>|QqRޚ63)0QА;=I}{R[ZXC52-h80QEm{v4T3l /ċ7WA/(~Wde+B{x:nh%XHGi.:5ޖGV>$GId_]ixt')~MWԔW,00bƋW&1ˁD(j Rkbu/z=3BPSI~*)4{/Ϊx2*ަ7 L*  G`]ųy]0G m_/??lx{JtxRA+AFi9pG@0#k#JgXDt"ZbNw889}M5[Zka(τcܧMYya@9-JA&3\R'zМ|g#ς'W66 IvJW80jӫjIS Vti'[zI/מ,pwI]RzYj>,0J{A>޿2?ƣh"I ]qzLq NHY$w7N%Gsn!}Z8uJ;Q1VvKIOqKSl U3J΢J\GyN}T;CK@pb݉mۍ$_V@!pr LqE92f {PJGҦd[]+`6hZ8d6`Ec a#\+>\5r^+(:Hݫ^2od_(_YNJl|b>!q4q{՗WOW=:ꊊQ\T3QIMlhpĤ !'Mܪ$%,RN֫Nr[LlI;9;" Q 'Yc^IQ1pHX(xB 'FFa,Y ',41Lt0Ȁhy'[Rj |mX Kۈue3Dyh @v x (121IE,&u"Ц\ozz-˺iRE O `\f:iafX9YtP5CbIoR@Xݝ}_LFJB1,a$*Y;ъ8xb6"089~ocJe(h)k6:gkVDks%o B@e'mmWcJMP594&k 2ڔc N2F`HGњ$5[P3YЉG{cU]t!JhY:{\ɔz\"d#ls` $Jt< OLIzɃL㪩tBRr+@T i,fBx )$ExH ;3~=0t!)k0QGZ !frHi 9MlsdJ& Y]J%8J+ QjZ99I*KȕvQMB%u<'ǒjۧk-~ A:I9IV i Crb$,ܡ <pZ`6`~gr, Jx҃jB!}sLD%!CA%K2=b6{N ltXT,W5cTa<,\Ƹ IeIDʹ$KDW,I c42,b $ah:; c,޶[Z[߆Šn9pOˁB+K弚uN0S{?Lam1ӑ %jNzA٦{C鍆/!Wtbh7U tP*G]ugtG3.-(f%ßjQ)hH7ҿ5ņ7 o~?@G"UJJV2:JM'0MKbeQe@JE檠"D%Ex6n}{e9=}TٺXi}>-H5(VTJ1]_hhNG\LFЫހj=k"U!H2X9:s m|34.,3Zvd%瑑 9d,v(A@aWb&>< y*kklqI o6![N*E<[rZ1Yt5vLZw?Iy{65W8d>)χ4 Dv;~!]"RQU դEG4T9U''Hz HZmFٷx r'˕!aGG Auh]\p[.EHtK=Az^ z78M\17'ik=G>\U^7ue8 -*L>=},'YF:^K.Ht3h,fZ2FSA=A*ҘWTSmDa2Vt 8ˣQV;(uL^[cTJyQL̵2rYfeNz!Zqoq_Zo}z״ o'ek }u fؖQz{z`=B8i-9F֖ d.\f2Td"Lwem-#<:L2/qn^ 1EjHۯOpbiH"-J __]ap9@ͼ̻i*9ԚPCo˴k;h4=2ͶYZdiYQL؊!!qñi9t"@ 6 㘀`?˱T&Q Co}gd7FoHI}${x$2JI~,VJJe{N(tɺZl[gth$ZQ{δK- ZHu),/~6#'E cB5}V\Ŷ8ݒni$Due}r˯ Cz,M0hycM0 Ǎ:OF V'K&6ls`sJb"Pju yp(AXYDJFъ6i )r ,,U&)`۔G.| {b:s#w;+^Vg5qd> 3t0ώ~j}a1`R"&)ȠQ.%B=.%#9D-0Tf_ftzCa}Z3E?DKK,WJDGju!vFa옴 p}&\~BNH^ "ep'PBkM\zrfNlc6x4ϖH{7~ic|vhrvܪ]Ѧ]Z,ˉ 132Zr* 5e%:#=<_+xaޱ?Lnxme YֵJ96tl,@K&-,+`B] i:K㴸Fݨ 7/ևLzm^b<2r% FiOK̍({wHr$ yOș,K-Yl*0AS̹ɏTw!XIOyC3KBl|O$w<$KDkP͎ErNJkƻ2Wg.?=ZYJ6QR┆#?]jvՕ@bjUICrZy5?8қK?N)5qW@pN(x a(cprNδl7 ?r28I#-7%4rX.+RM(P q-nB6 I JyϽջsbK*Т?N?/K+D& ӫW[L;1GRofN4Buצ+'KDU 
niYz(^\kkɯW4̾8qp</V뱽'߮}74փ;J cZ2bkK(7tڌdjk37'[Z1'~,Xvxe5г6Iζ lwNnkuk_zj%MM=, &yP)?1RJ{֊Vw߮>ǿg .թK.k]B8aX2qWM4L ,ϷR f.8/ɟHI_߿߿7?{PEnN~io~mUVfvvyKۮmn 1@zd}}I{9#/v{c$ HRi2 ZLAsE%xIo((#j eKa& #=a\~|=DÁ_hWPމʘ`4aFJ^|Ow]C`! 40TiP(^@oS"1%"g\Yfo]Zo?I&eozԖ$7jʠҵ{9nYX6ط <c)RԢ)*GBSTi%YDëcz [L,ŧ{F7Cџ;7؈ %BcۍrԀ>Q;pg[0mhe4xtx9/keֹKv8 kacD;WytCCP'>4ڳ`-SFM%1hqVjɽ(iΓ>[@fQ񔓭j\.ɻZ =#vigǻj݌MaM>dŠjUI >:9wN) 9M(&i\dt لeRr-gFblyȱy?;,r.%)rh tB.~0C.6vJwT)-JﮞsL'v_!sALX{*.],RY݌%{w;xJ%ne5|YrȭagȵU-FhZM ia4-Ѵ0W%FhZM ia4-Ѵ0bhfѶ0ka4-Ѵ0FhZM ia4-Ѵ0Flc >_Ӊ;_jOgEe>1'e;6x\oؽa B\ĝkOy%2ɠJC22/\H%DuQ)& hS.1,AeVHD3ef+cj<O tOvRowbf7m3>]enOz 3:SKaSZ+V\dZ!F "dbX͒rn`ސvPdpe. t&Yq6}6LNk I8ZT90WyW qyCѮw!͘b%㊆, YZ 8]aX>0 b)kMu(S@|$$ )]6|W~_.WۼwLwYV3 Vš4?y ,m =X .h_%oS.Ь15 TS%-(%b ^q`VapDT0Y17k#>J= e6\FsYiY0ːCg)Rk2/Ye&7"s֔q[|^r+iݡɹ3F,,A7BOOnYRp:Vz G** B樊(^YJ&sc[%`v:DU`-R1"M9Wc4XKWv,\~17L[k_lCw`.Abș6{OR암Zو rRI3\+'FxP]^Bo^Ded @T)2ɚ0,iGRc(?7 YgN_),p[]RzA%]rhN?n#}:MXdF&׉)U6*H]Nf^cPZ02Qh%,`%:4SGLTV4χcY@g6R&?{HR*QIF/{Grpȇ qF| bQMZ-i.y::}Iqղ).LL=~U]q j0j.On kW/>r!T9 1zέ1ΠE!DlKRX.0J}?-> BXoC-2+zE=8v /PMW}y ٱ!xeU[a+X+NGI,XYm *ٌjFԄѮ۩#βVY~WfϞ}eWf"#nݹ3,7o|fVMlG_͙[㈎^p>:*d&XX~nʝq-2  )tEP *C@C&2F}ɕ<%S쇰2GÍ<уFcpѶ<[)b}Ib%o/xn1ӲP'-I.4gK9Ϣ /ǎ&ߖsݻQm~n{qvy%!Uo5b}"17QᶛQv+/Hyڹ&azK0ԃtT/CTf?RG7\W糋҈]i3/[ûe2Y;w?ɖhP6<(w;?[DGS%|'\@WŖ5 u\ :2L 6\zl].Q'Am g LתT[Bn[Qet n7 noI`|3֯˿,Bf?(tJwg2i0/p—ï%EQ6U 4CK{%ݛwiW:ޮ@o ۍwߧӋWp؎H념h쎴LU{!-|*i ?wT+ g7@*K >1bL{~Jjş>P%}TI[񾆴a[ A$VOiϹ6 qA[93ՕbI;Xx&#}k2Aa0*H@ ]%+n GE@!Ac%O-5١ŢO`t:ӃϳˋEĖZ^owER֮s3-R׬}mJ`L+ж?A ub.")q)^'UQ$T&BrnroMvtՌRSR-}jÙ29,5$S""mAzknV׭g u E.22FhsxG< q:=<=_qMPu"eT cTP+\gUЅCmΆOwly ً(jjz$*U|\UFы5v:.hnܱt Ϡ6/{]_^[rٗ+0U`1P$ MUm0ic_ǡ6NxXiWbZLu\v"Q99l/VW ,: $I%rIV D m:BM;e0U$X4V~O ͤ?6or4#2he V;9J(OG< j7yjIǬȭaVH|/r˵}9qX,bDbv' Rأ }zYpuvMDRI]_y%J5]z 'Ӻ8nx31՗2]~֘_?yQnu{|e=ܧ/yªGeQ?@n6_.+Vc󦊫ۋz ɗ/}o_ߟob~96ew7࣢w(:U4;/xEYmߦY,7raYC_LK /Z'}WF~~qm~tA9bs$f*~ h,],r&C#Gܓ \?#$> zC?/Ul.^UδD9%cMg5zc4Os.GG|P龏 Z5|m;n xSۓ?7qbn6UW/` ťՊ~qvC.!ؙ׎~k6]QlP:D[O6fT!m?ޡŘu-ryh`ܹ M])P X 
YJ%-\K6">v6snIē?iGofxcy.<G|Λ'd=|wJok8Un9Ծ=7B-TehE5;_Ic2xOr6 Iva~kL/xžwv7s7Z)-dp lwїnVw[ZXGp{W\`U€'Wz]\Rf_[kcn)Udioh \uk~V`WOrbg}`w?+/-,-.+%B31ϐ?т<@l?#Kn+gwEŸi L`ci^0aZȒ$ioVw+ŤQ:}, تeϲύn _+0] /> ̐Ê/skPkOC |8m5n;=o+,]XڲdbvyZ_M2X ;mSZ6d28e@Sg+{ۭ>sFA % &1/i^ڟ٪tZHt9  ʔ ft&E3?|w#J@~9];ro.Uu+ux)b| ;ӳ Κ`;"](xJJސd]NӧFmv,[yO]QuJeCnPT0.$V^0G{48|@FJqǶp9# yn=:Vb`WڤP#o>GcH|zSBINlY:v}{&i^Q${de}X9bOjyoy,WG;7uV*fK[rJ!7&Qa-"Ρ,ߗXD/$ф:fcT,XEBbJr1 ,S|M52huuc9֕r<9uR %|:ZK&jB5JpMVtl)$IkѴބ`u΄/&K"c]r\0YI`RRcU1 2pX-|& ftc#+sUlCh[+ 5g]MrQYW=<ǁYeʺ88 ҩh[4u 7Qw(Sj* ȿMo0YSh*1]Evu]k,@` #T{a<>_e坏߯g\?Wj ^Ϩ ))qȶG5R\f9i*kjjlks"8*BIU3fmEI~APT|NTermE6Yrfe53Dw <}X 7 VƝGUEq>e[}%J}}6FxBB. uƁ^̚ @n@\yr 뗅8#%Ӥ!E)Hv1kI%$bDF Tn8YAz>^@|J۝2;iAf=ZM48v- Н)@>)YRC9v㦍E!ά 2S"kU(M6ʀVľ_"#b+T|?5" ?+ >馢BjYbBu:C1lͥVxLk}t֖Esirِ4 3˱o Kᦥ/U{) pM jot~ק 6d_筠4Q S}aHJ-ChӴNkg/oat>]-͡h2m^`IѸ 84qu&d@3588°h ;aZKEL}5P}057)".aw2hU<X 2m@ ZxXB¬e#,)H-$9EH􀕑Z&pJ7a"Y2'YsOBx`2q˾ %&.I&-oPA.2 uY}6VR`eSFtLf ؤP!#I );(J-0 QX2y=ˏp]}uCqR*e`j?[F^t||srTyic%G(6/IVߑAZZ"%UZUR@l{ޮ?I(Ο՟YtP78ρ0>NRuk?Wu+y(P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %VQ`"d̲?J kG k-=z%u3@OQ `Bj(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(*eI %ޣ;`я^ +E %ST~)P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %wH_-ÜHL 1V,4/`+vi%Yn ̌nEXbU1gL 3r&Pz32r2\^L&Pg" Uf83r&Pʙ@9(gL 3r&Pʙ@9(gL 3r&Pʙ@9(gL 3r&Pʙ@9(gL 3r&Pʙ@9(gL 3r&Pʙ@9(g}=@wCA@ 2Qa=~Pۅ&Fr$09~IK j% %.T|PsWtkerD$+ Wo\7P\)g? kS~ƽ7~W݌z㑯׫3z_k_׏ДL{ f%sL^]MFMD9E5*Sz;Xo ~ V$ ި y0&L|AfUcq#QIe_ЈcMGZ u;7*T TdztXkvmp0^nz#;|h2&L'We'[c#w&.^!>o6ϻyId_ެf;Jpr[oPpe/ /r 7+lWi2DY,Q` e 6_:*;۴3j+C s+4lyak@[`B?]_Պp`Zon~MbvQ.-SL8Jvw(F]^; ‹iZٳёX~ @o &R + EmI ZrV8'r3HzHRen? ;ZWLSB[u/ĉȸтe /`HhnWȆ>]}-GD*\mGQPGqEK;jq<vHQB+j(1%TJ-#2 $q F4B;*:ڛrDR`(c<ۻHzh Geh@,73+F1`8#F#r«T0F93zxW1N.Wsqݏ5C^ru\+A|J[ơtaԘÎrVKQ pk-=y#u}֐T2!..uІ 8"I(iV{éH8Tܛz٘\3ָdƅ?-L{n[YqR^ hߪT+e?a딜Oݼޕ)=u8r B&J8{v6A ~O>T.C 9d9TqڙpcΪ)0_%P0+! 
>_想V1ւMb%sڔ@+Ӥtly|t42g렢Fjv0y& L][Vdk|TH]fSx8(W[/sA">m;n֪X;F$W6 g1aPƌ!ϺX-!_YC:Tu1y({裻/[hg=CC @i؞d'#7^7z|wq4|$/d m >m$q~}[RqteƌyHV:E9u`ZL+/ѲX-y$.Iqk?LUi4GkBӖBW>4;7~:ܬ[B߬bMYKߥBR~f:<݉Zalʾ 9'V;6?ڬiqwEh貖~/x4zwsV?ϫlQa:-Zۦxk*~[ڦ޺6k Lo'=OY+EWK/Ȅ.]}>TwY+쯽kr=\3TWە{ Z*2ح|+A_/-0ReY,W6iFl/+J7<{҆6ܼ[?\NSߛ҃~K7CRVٺ/"o^C{$vt|rr Kk;4Q `BO̲pOn_SYmዿT՛O, >0u"aDtKcNjʼ˵=֥Z?aZ?y_n0i÷9!h~J<[Mrv;~C@׼?9恶zz3ߜXoLW mQhգP ݢ+z]ў:b]z;$(BW@Kv: {zt [͓Wh*tZu(++~Ԅv4p ]Wv骣WHW"C]B:]uO:]uz?w*{WS˙*sڔ˕?n.WAW-<\W:\Lggт< 3z"!'hc|>?Yk?חcŊHQ{vٍio.[ :?1n5Uc⟽57+Bo_sN}> \(h^"xkˋY-FoqrT/gh7Ie1a|Ѱwf_y+4X{3VZ {KmQP!sHg g~?c7eӪxi^I\ dR7mtʢl䭥)M*v'#:\":Zv]Sj/^2\Ʀϧ%.V0訷L SW./]=8=1քͽ;`9C~43e]e Bٸ%S3 f0Zڲ% V!Ec}JksY\ֆysuTMM.1ڜ1S$G*mѱZ\ V3πP4n "NXY#|a_#G|UBZ)R@*,R`!!K`Mji(A "Kp*h-W熏:8QCJTʃl]DF{ً v }ƬI@pbh1FPTP]/|yq^WQ"Tmzx324 ڦւ'a5BW / rbTYd$9Hmi>2-Um똂[;KF:.) >@HI&bZD^eHVLta8Ѹ<{*EK:Bzsx , >ܨJrB#kUgkxf[MyY#QKCaw??;9:m#N檢q>e-`I'x_b=^#EXF$S0m v9H"P)Eկ)7 Dpnik ) zc 2{&U%QM! [9l\j%?zX3,vMC( Ԁ,.6R)EVT鹿,q iotBŁ$`*#>ogQB 0dJ-66Ѧan=|[~<8h\]Kε=3d_ nAf4GK`3Fs fcJ`GBAQdf1h֘u_k>n9"9բFCo bP"z|x4L}Ө2#C!p)C^"`$C*ȇ2tyDࡵ v׬&"hKR*J#A\5$\Au׻ +/V1\KL}Be YR<'#!Ksa:H1-й 槀GsF%`r hNvjhcv9saW u&F=PI']IuECLgZ{ȱ_)af7ec`v;$_z4X,V"yTO#}--DzډK𒼇@$tq1~U|sakF8/pԀoB>55ܕZt57ePXVae%kXZ 0_4zAO+)a:5>xӵZ5,=88R TV3?x   + 79mvgC(Fuۥ#D`З]"̎@ISDH+ 8DK,I*F @YFX넫](<]C`yjg,0B3[]2aP/ ~1n&Xj2Cذ:Ĭ ^*Tmb)ZU3 5j/aq͛ ¦$Y )\mguVS4˗GK@cDb"/6K9x6]/Ơx9ig`t6;k''\c|6asZ4.t\ g`QىKP^ufE ]Ydza)4R(U.Jf3R!`mhfs_UQ AJCTI )H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@mNJ td>oE*w!Z/GƓ@KH DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@:\%V@X|@l@ֺWJgH tJ \sHR@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)W d2'%F|@l@ր{J @))Q ^Ҥ"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ Rzh=to[8z3PS,u^փ@cJu;/|<{$\#\B>rK13A¥C.}Im2+t͆{DF] ]9\9Hφf{]J=wCWM/c==]m؈':tZTXlrc [Е$oΕveCWWeCW}+DJψU:B;]!J㉮$s  ]!\s+DݾJm$"#BgCWװ\ Zt( aҕgB:Zlo ]}W.Q- }vPׅK8l7Q U0=rd +޼Y1Ik4N]r^rVp9Prݱo en{[s%Ɉ&FBӈ֘}iDingQˬk 㺚|Cq} 5ip~*&c訐*}q] c{xOSPPJw?- MF0'/xsڋEa~VS(~<1%kqD 4ϊ 
~C&{w:$vқ?>?ȏWeQk}J)eUMCݔxlh(`~?n\{q gшhޯZ!Zz(]YG!+L.thޯ`DWHWNs:.e.th?HQ:Ctut,c\'꽏:AkJt;];6:KVPEt`vSnhj;vj RDWmz.:#l r ]!^i?":Dɕ̈V,V\ :'4++)am%2+7CWWX ]ZTmR]])%gCWZ ])%9+@':DW/TLuM;Z+rLٍn|] 1ͦؖuƣ@Q ׶QZ|J&*~t`k">IGL]z2ݰT;Do)WkU샚lj]H 1 e)UdtFElr`#e4J[uۋ( ؕq ]\T.thttrteeDWljŘ̅0NWR]"]9mhϷ, l{ QFtut9-Gnp9E^;]J%wCWM{`z1\+x}+n ߷-J]ݷ鹑ֳ ;%+y.thwB] ] c<3ѕ0; ]\at{OWRYFtut%gDW]!\'s+Dӕ1Ctut9]`-Y6tpU6ADTm C+ύ>)QBRsca^lTGݽܨZ lUx.owt咚^Вͻ;jYe,9zg Zi+ 9l\aD~QEZUFt͆c6BO%AcC+`.#B:c\kY.th=w]"]9 |cNW]"]yŘ_&مVy8.´n vEÓIݞ|}eoc3o}RT*4eZ4N:6],.b`!f儽O؝Hl?(?{M_n7heoLL`5 WB49|9F;Q\'#=p欌_!U 5?k^qXǿO&gJN|&f]G@ԳsaQc?f?4Pnێ?N.dkas]?tnwv~V<\5F_ቤ_Fy6&]`x0o߃~I\pk2*霏|u%@8)Na||>=[|tu\լϿYzzm]O"BK.hOiq>~4 ,&NF4}46.n3 cSEt.5uw;:i-W>h}U鰜 NNx3[XotVƛJ"T\hn* `2-q??%w suhSYI<+/1+alOqQ%]T| 䮪Shdma.H] a*>1.}N-^4B,|MJ=iփD8'lѓ]}|<w_gd 2G~1Lyw%8K;9v4REl7Uهl~[s,>m>oLxtf9.?`!.!OaԞ\ ꂮb 1.ZnC_gR!A}#huJ G/jtNeLL@5A#< #`_>'b8b^_) +_]I`QmS MMV5 +Y)iTL;%nU# ,3M ayżWNxkk080:w*Q VŎagpM 㪮nfO>~{6 Zv387#O>[ 5P|;M:94 \n363Q\][VYLy(u5S2h.\$ LӚg#|k!XEIkc5k]9 KU%E̳Aw==IF8yb~ԛjgY"+B4 M]1'C Ѿ4w7W{4`Zlw?v83-P[9E^ oX[ܑ 'n>(If9DŽn|\F2_#*6(lEJJY%S4ޝX8,kEQ-b6ӹ0: f#7ގ׃U8ўMr.]׺Nu+p>ѨMZl EgVc?Zϩ9#o!]F}.ҸK?0۳W]ChSay;Ἷ[zռy}ۻ7NoKwh"^&V?hH2Wz i{C,hwSsyZ}{dw~MvzF&&=XI2Z= `+̊PטYv1?\jq3]k=mSMw zn=re;FHX}zD\[ + *i ڢ [\δϯ""=)FOw gƈihO1/MG%裇o6z:ig)X7e;K!ip>2&,-ӂwz-Zo[d6y.:txѳͫ>#CaY e=RA}ow M}[DH;&LB{.oh3Llwv鲶G}59'y":L:bFWcdwM;Jcc2gSJ^D+UkڏɶzgC!x_\^s㎹0ǜ/ZYó[+aЖ [+oS`,&ɢrac@{YfvLr*AY3?XT@ͯIƍ4+.h%J:m 7 dJ^s;|M[޽Ⱥ~[<q*6FX~0hUUP) mf%* "-ho-W?{Gnʼyh"Y4g }ns d-ȫ쵁Sь˨imtO]b](y[C.X;̥1I2 lwX?"w<28W,IUj`LL$Q'$lH` ]ә սeSAoSo-#wwET%E rRl- Y.#*ob\mr;>օg ;@%WMG0–ŕ.t0i=lCs0Y3FsY띩djsu#QHcJYY)g]ɘ"VHdpUҵ_Bc QslŒM0,˘SV-gtm 7ְy_ekVDK'6ey{˅+_E m T hMF'A_K6uZ'侍j uhGovNY3%U~!9*r=n$Z2 a-6?06&Fjy]WC-Cw|SH XAFӡg2&) r–BRCRԐGn;$e!)CIŲ1AY9rIK: ~k}8U@Zl0ٗhC̲Ҫ3r A8K,6q2s~6^W ;%*B!IcN7& GIל)&U[b0_Ns A'f-Ā,yk T: fk{bb)D])*9]_ӹgu2jp4%ΊVyb!(f65yp9k-PQk%/`S *mBRb)_X%im0p6:CK*̤`)hRu@^8v k\0ٷi\4djyEb$"B QɚTKj*(`&dRcұc.-tu,{oj[A;pK@"{w'v,~^Mﳯ{pF8.j;*]f;obiϡEy(*m#[!V6[̮l9gr"]< 
cODl'B.]PNOܫLeb>3e$|^V/ZBk#cKbR=Ơ8;4.{Ռ:xAǘtb 90ŚY`C0RDjQ}DZ65F 褮#c/V~hfyW5b,o#lE]@//, #NZա׵Z@1g#|"&T]mBktnoV~}bLgye3=h+MzDuXZ2hPIO8|h!1g ^p*V(-h+mThK9(I(9Sɨ9y%y

' R AQ.ǮgXGfs>}^M@x9Ay?pxbEb]u |a}8msꬵỘvLT ~] 1^뜒EԺ(ȄQEV:j30S1W(@)Ijm ֈ+@{zǃC] %uc;"-!57gzЄ5[n–3=ln9f34gcs_=Xc^쌠;SQ0bMe]1錎|+7nm~O(JȆs!䤒R&TCp*b 9f1gDZ":.kWc%?T:Sj>DVYJ@k*7e3) yBS[eD]bH @mIJi6͘EX%$BB,} t ca*K}E2EG J"&djȏ;nTPǫ.Jؒ}K`tMqrV=jD~[wMDD.lwo2Z9ghg}#hElJ\sbxR x]LeeqڌQڸPd#Wuy_%P!'dQ)4_ 9s #-]Us#p-ODQ܌NNgΎG͵˴;z4@A )Rp6XE] h>EFnI#u!KemBdrZU F^],# V˶k^˥+S1:qB*QUֶ8*6EvLI(G1rB&9H0̤U:>͹پ2&VFMm6OjHP1P kNk<pcS^47+}_^[vPœnGCFc\hƪ6M{2-m '4֫QFDJL Zs'Y=*RI|3TֹPZEM',IsdL(*v2%s0>$ws|qV7:k&&ϩL'[YYm~vWlw+Abv?a9/ۆy3퟿NҔM~ə5=w-ٝ֒癐GO-V7dZh"2he|V;VN0|ۙx;;?z DjY*BQZoKgM_k~yi }{v?4_Dg__]~f-Nȝ օd,ry뷆;A֎S{b>9jf \'ӛkM~8ӓ]f'c4|؟փ{gkLDV֍vHnp0f0N\_EiGI`{͊M-|[nivTnuyF]{ưuԑ6qi>Nڲwh'`/:e/k/_gXekfOiB˯L.grvs}Vr^$ߋ[ܟ.O=Q?"wo?O~Eûyןߊ.mSh >z z. Z[m1}ϸqvuhЭ-4_Bмo:7^p_OmAŭMI?)GHފTKfEVT6p!Q-\&ɚ#=ats>D|wG,3Skym\=|ٔҺfx`eOg'՛L -kMwv^GMZ_Ucvv .?=Ձe;T;pekR2A~bYb1MlibbQy R@bR21F]OהΐnP2@w! \ijMԑBbY]2C55N z0Ial6r3]cYjwue:ͫ-_?𹳻fw|D6V)Ӏ{ 92e;cH+1-KPAwmI_!.6m%9vs%eOI1MH*gE"iO;pX=SW2y I휎q@?˗q0>o^H< @հbTF 05[]Cֶǝnq[$K}O"Q*[t4RXldD3: 0jH:!QmxBJƅ.RYJ_ hSBV2pF&MR@Q"y#"BBbPdLJMf^~U!ؙXOdTj~N>:Bp ]m?Ŗ> i1QIɜ OpRLm& : !_)XP:w>":eT2"0 Vd<AYH\"-(Oqcl)Lu YLy)P )EL$3DepVz=]8[Fm#Ӊ}G̯t>aR!{.2ܤELRRo%GGe0iBwIx#EuDՖ,pdFgʧ!qh.0-5/ךPLd-qrTDZ_\SӱNUӓCYIЩw7'OM\{o[;i旚߿#w*p̢dfv `sm kJI(7 胁bZ$JDZ8B>N &W _G\:&_g5-2.;\he\S&FW8#%͔' \FQ@\Kt@i@!ph,Zڱ+xF U|^OIЬ[k>mul(h-~23~,_A|xW6@jo{gpHa?_pmQ.y}=xQ\㡿ѧV?ep;J3>v?i VY  ΋F2EJV~S@nYCVęǃWwv}3 M+2cfVP-:nJ`}j!13?4@;ً8ߙ܎|/zo޶{%9ޯ6__P{%SbR?{W/Až׬>xUH`zf۫@=.R{*'=Sڑ6ReTSN`̋<ٲeZK9. 
J}:KƁ1jq#qCw1>䀘slٿPR{KVs<ˁ^xmҁ3q:৔#FrGs2ǐK{)g w*q-:- 8NstdddR2-FbRj=7pj-7p6OD_ڝm3"r5{3n7xf~qߌRߖ]ws~{s9r1SJY# =%.4z*H>.~"^R<KWnևp~[,Ƽo42=7qaVp Ph$3E dxabAhй\9:UT#SŏN;US!Ye^?9sٔC*>gmd3tƳ|d2L*3υd)[D%rZ<41K6xD[:b+Z϶ 4ugkh*YZ_ NqS3.=8".+3jNRj˴>Bi4W(ࢃrMpD ]BUcfʤ1j7Kla/#Qy!?!?7NK#>MiC2{ `.%i:"LNQZtBueڼ ~5/*y;AFCzAZˆciֵ{Z&f^#=@ -t/ DoN!H i|n--6-iF[}L0I 2@-[,5ɜhƇ4]yL]㰏*wt=R5)!(1e6}(X(e"nN:;-mlJLR%3fA PK!døſEK ,nN]㋦eaߜjߌor=~$ÜFΩW|vן߶ё߮?N&Z[KL19LpH4"WI͙?>Sۜ 'vW=E6F 'cyqGnyiYbb* (e+ zoCnp{?fݫoσ`M"gO$4- Uf0o'OxW9Uk05WE=RsimsYJ\u%zmJKޜjȢS}jb.:SY.)Қ I 9, o#ĔJX ED2%F;'Zq_*ljo}ln '+珅M2b Q=M^{p&c8g2)gjƕڲ-P`e5q2G#[vcx/vHr..5gYt:ɐ(]]U6hdK+L:  ?T&!& C4o>-w&_'ovK56Fx,`xY^U desQu +*&8i m>}VkWMAVܟtVvg7)|KyOõ{6Cqmrt9cK`[i< DIDud1d $`gt7Nă6kr7ag*)5ԓ Y'FO$9$3.^9)Jjlф ͱs-5n]eY9:\d(}O\+.;ĎS/ 4K7x^4=MʥGQ@?4J깤1IbnϘѨGɜ OrԊqfJB"w #z&蚌#j2h)L!APȵI1nиC7qd6P}Sr))fS18-Hgՙ˥,CRikUg]}LG?$/O>k!{.2ܤELRg{ȑWim|_\ff3`07|ƚؒW-;qݯحleIci*XrF$A)B9A F1 nQ@aH&A"(-B(t"8];"gVtn0Y͡u';-.vv ':F:[<9tvg[zǨ<߁iB;o7j'q?v4L;CQK!<(Di3/8S(J!$H S'\\d 6XH-lB%յf쌜횱;ҙ.3vԅt 9u^uᜢnד75$42BQ%FeDPp`i55Qxi(ʣN56<NC6l` G'!(vJÉ6`d1t3rkl7%PvgܱfkmknxAz&8RD53IFMlBgH# 5Ow{q* w -dDЊ(\kB$^4cڕģQٮk>Gpp(35bFTPdvh  {  L&v*oRFSB qy QFI@qsƎ5bg.Yĥvr*\KvՋб^^F(}!Șzj!H",j@h%&T)zx0sWܱ>QanIȭgq ~w٦Z0:TAp}kS(Yx׎2K%N0KTB%50ʣ0}fYJ9h.2KyƔ`Z-{9IWe"NŔU*4/NEA5EZRsmgzd.ز,+=+tGd "^@"&ʭ!9C2ՔT` 0Р]jn8?r\ݭm{vnn%GW22OTNi 69ٜVHEA,hx=TDV!csǥOҳ^XlN4y,"PrDI9 <_ZJKj$q;@Ts=Ҏꃗ 9DN:YHPOeNԁm s݆P}@$&h5A3CMyDU s,PY03r$ԉaz==3:~w]C1ɼa+lk`fÞ318J 94zE~ o';EoH$MzeXIWPJ&5oȾڨ0ɞlq9TbdBEAwWtC!8PVyzSꀮ[tf=-x8;Vۨ)k:0gVD!FR$(ݭDžsa=76pI6%$ <#hdR4ͥY~1%k91"jrEC#h1l.Tc#?.Og|ٺwG1bjN$IUs\u8 gP‚7dZ,q$w'=q`zsNq#;[l)39eI PKUuHS4=WBTbWey9ٞodC) H/=Q`}A$sɳCyd0QyZr>J)Vy)"Ü޳vxve 5a\ (M"e$*yp)mfD}Fxw'X%M T\ ;P%7TqIVG}{F}"k*!'7rBŻ$ܞGtc+^UjgT%7>&;ΛӝuW6S4@ Α2GŖ*GD[_(مЋ _Eh̸Qi 6^oì( \'e].z|(]\ߵiq™4_CLɚ~TMڿ } 젾8X1QQ`zPe))Gr \#D6:/.bd3_qBʰ(Us[ƒA4q s>nV\Beݳn.HZ5BѭK͚]^?j|[Kl]n]67w9t:o%mͺClYbYwizhi{r3?G ͞nɷxj;-7xsnKs֜}/GđM|1]O^V?h|= {cϲ1iBڼ-ŧJS&*I Lץ^`8лr[.s.k0N)m'Zw5MlRMPq}d 4_$V; Tx㩎F&--Xiry y>EWl?tC:k1 ;}L>~=LEh x*: 
-OW]Ѭ!l圔Z҄Ws\T=lIqIO9-IT]spWjXc8Z˱>߼oO2h2޾.jD"E#V ]^~y& n ,k|0kݍ)/8}.1/^uTh#ah2)yH9&+m[' <)pjȊMD=p'ɺiU޶ٰ6#Z7BuRm>ecM''֥Cgjh*Lj5\4WMi&N-0%8*h9 y/=ӒwB6 8ɤMF*iS%JT Y#4^>wo)wL~%H?cAǢ4.!JA*(R9i2jO\tz,7_`Q)efq>ps57Pg6^~Inޛ}mC3f/3t],7,0-?IK!`'=OQFEYP^#E#d^u}{5%-0 2b( m!&;>)c;I;t`9nKz]!g+_ݭbv*DrPA&-PD+(rMk mb`a(zY!)L^txFYUO5Ϸg 1>@"H)rɋXBČEN 4 Rcg[4o)ٷyMJmMG<%JȾnn:iqɆ8! Ik a4yǰ6mjWӃڕ5dvM}ԛj&`%Nڤ/B']5Y͏Fiň]bCr鏏;][kPi9n,-8gv@-ɻc+Սօ˝cuDl61i&w4@S-rlU+5^fѓZP}a~8w.fܿ]#CZT[EUcLn.Z\L[ \64tY3D~Ώgy^]˴g"Jzm1ؚZə~됸蔒eAy͸'-^PC_5psY_QX!)#,:q ިLQR2͌FR;kPCPMtvs m4wu|7&dKGJ7`m 4 P‚I]y8kM/܊ SpRBӧ17Qr9ȕЪe[Y&IG\;M].qrow.m˥)y/,K3Z9P_24H%@>3V(/|P"Qi|,.(-_a-fUt_r9׎x煄l5U$ݒI%Bu2.%s dT0\&xXB9]g]L]нv=*7͟ BDu!V uڤHdQLR%+'3"A61-6JmQiv~2[ɈXkrDPNZDe1ge$&--  D{ZBTN*WM'-!1(|RxT?e9E&[kL_oC}0Gq6Z^(l6C}H2/n7UCX$V Z1 BTRa @r#XQs39[)c`k'ma娆Vp. L$qzo@UISO434[DR1\ F@&&Ass$;g 3 ˂Y%\2 &Q'N luo`-㏞TL$ ki%-:؄cI'f0@2iDla `b }qvK%p$KT2$RJiLd`%Eh-P( uVcV$N;<88 YmXp1X HBkC2^ю8!V6&{{Kp6۴I`ZY'72g$sx ZIjk#fT0ѓ,Dmo1*c[Nc))1gћDCA% .SwˣRƪ0ܖј:Xët65"}qCHU?0/ yRjECQ ]ɉMmpiԚ( &AxgˤbbjP|ԖhIMtM2 ҭجŕ" ~/, Fed)b~ x ,/\lBk V$4Txz즣Rb$*$⤆}_(3_LKħH 3:;-Ke)lK<څruU F5< Zc>bi#B0D'̸48s9Ń, "&:JPk\mvg_rmҶ3sqw aZ䫮hXM9C@+h+!b|RR$UIjq˥w\mӶj$):ZY *qʒa/s\3A%3Gf[< j6N+߷? TEf4 P;)"g`CYރ2_VKWs® ?bH(\m x'Ws#{j.om~v 4ɫby[p=knXݸzoOY{ƯwfeˉZԷ3x8tޜg0^s._@{G *շBϋzΘ Zb c@sthm+8Უ3+%iWR ]!\K+Rt%"wtuteԯY5rp- ; Gk{)& QG*0 yћ|ק=/^fʿQ*hNՅZ}Yb$V*ÜL 'dd7ǝ{ r5+m&WYUYs%] "9-HFUjV*hm* (׭N>UXqb-DBWш(% JsNdIt,.%8-m+D;pɹ.BWt( r-uI_`]!\Q 1x8]!J}ЕrK"7t`eO$\X[}" -Ӯte:w`V Y ]!ܯjkpt( ))BRBWVԝvutŅєDWX]!r+@{ QrҕZ[] I,DwCkU PR;:G) úbZ֯63dq3`'AZ&1Z^RAH[}ԇg`׳URuaK~x4aYBD J9 #z$r%VI`^'s,+jY>dYT$Ec Rb@Q4W>hGrEuAj/bEQ;sT{ r ~Հà њ;&]#]iiV!\mJ+@I+DHGWgHW`WJYR VCW5\[ ]!Z8D)hGWgHWSDW؊b rJ+DK[(ꛡ+ۗg:} [ub ']VN(m@W}*˄*%Bt( iF-ZFNWӎΐQhWDW3+5 B5ֲӮΒ,Wִ7;WR\K)5Bn̎~ܒ)z-?OntR%xN"m<4l'Zu[yR;d#в\CX)0e0SPVtBY]!`YpU1;5^Q.JXIQ'X[ ]!\KK+@+n;]ߎ· ,) V]!\!K+D~BӮΑ(c Ɗ++E1+@-YK PeLut?]13Xpzj Q'ɻ\+OZIi\] #َzj BN .#et(9 L.8G?A\R c~8]q]!] 
)0R rvBk5::#b 0 qK¨_M);HZ`U/.4eiD)HGgHӘtݖcFe@R2[gzNN9lԬo]5Z<բ:\nkphvTia#Bӧ1/$&qq޿{y/ZɅ2IiB~?-ߵoI0Hנ4\ ~ UoGGP[dBnWtg\gE?GPky; K}}8 ^,oWhV=ў6'á[6_?>9^Nbqi2_wRpO» 9e>};̮ӟu o㪝zcԿ&K}<?_7'? ~|ߟ8?G%O@uKx5@w\7~;[Xgj{ȑ_S.Hdh$p@.n_3ein$zX-ɚ֌fk{HvwUyŪn bPl3Wާ*#$bڰ6{';SR,B 㖐"j2/ O{UbtQ6e5>Ê`l`є [%m:x& mV(sՎ-L@}o5گv4)V|=/M᝴N}'fe1U7׬wXN&Wgm߯ ˼'>{x 85l&K9%~#?Vl&ɤRg6߭aL6,]Lm2S#DuF7]/7s;zkeֳϋI1ڷ|0a6lbq1a{Ũ񻜝;Ynfl@U ;yw띻2R.f~,'=`э) @@.dR7'='cT\c;i.ޙ8M:'H#z%G֑305{ETD&ffQY)Tr)ޘ+i4xг BD>FdY"UdP/& ɰ$)J̤Uς(A9P`,$Q (<ʦ! Ck#)YzΑz?^:'|)tp:trb-XT) JL/1Y'8S_#3r ǐ WfJ$El%%+d+IȐLJ%x+"@E?%k!\L3Ac#UpBkIU^f G/cN74mP.OiX{:_`fښpaa_Y[ë́Vw/@x|LG(585?kE.g DɹRT蘭dE ֟G/z~)כ\Y*\R^ @ktF9"D%9Q,A=S kۥhGorv@o y^&x#nҴMU ࠔvigPJ $j(: a6y`r{_&EkYϔРg7xqi!)r\DHJ:hS1y r ̼e.$ ߪ.J/NesR$ᔃh WG6o)KQ20F9DF3Nm:q:f6o[c\ˁ)%O 2I7"[V"J KQq _NgcR8'dbv4Sit 0*ef%{uߜy{2hhJp9+*aG-XY$$٢7@fKQj_׈놶Cx<$][߆v-hM$qI,uQ|HY +!C1YAYW`4(eq:z4GV)ﳪ!ApTO\"PR*E M MrMi A.Hn)Փ\^g}]h8w32{w~ סyՍ}U_zpz-!/jΩd}cFZriȚr'vx;__DDJII&vF*2(- TQ[RkhJb&o~C֔˚ӓe͘rs;l͎н>܄ϋer7wvreO2%[gd`l9+XP C A՗TF|vR ̩ԣA(YDd`H}HޤH2Lt T%Xza މ^fE7^~ifBh̲Sr0KgǤ}4<&$$Z|{g|6Obu}J>[F Ox{O"io<.1_6z-OsQ[9/.7BߺK41F-)T$@g֪\(M f[(TSNؼuOPRu_PS ֔VE3@i~Y,zBpRM,,@oy;U[s<3?RAm)Pd뼓 ۨ,aI( H[ .e6!R tAۜJ͹֏81gyYY1|0:"PkЁ=yȀ,5@UP%Uj%7(Pu0T|:E|y [1 +^6PD%Iw%,@@&0h2?-ZN4Tꥊ48XӑёH9#t}DUnwoYU_mVXOlx  ɸN'`Ci=`Li0<: q!9e]DҌg TdK&AJF™,*Wacdx̖k щ2ι`)12i!8[6G)b|C+}E? 8 7&^ÈW^5ڝ|>KixJQzS|RRUSA AZIkj3P"SB%G&tAOayȍ*oDMebvFHϣ˙蟿wfH}[n1\vm58ʈ2kN@(^< Ma]k&$zpO/a5}1ƨc#Rڽ;{k~,3]2k _6Z]ّBu/# _/!a]].=r륯ڗnԖ.*Tbr~Z@l^xZ-I } q<>?rϝ#{=#=<\bHY)#jwO>b$pvF<݆f73R.Q `zy<•̿)϶l|~3gw})vVýs@ۜN7KЪᙧZc*ZMθM>NyW=RLpvQ$|3КدlaE$s ~S8'^9tHfZMT#uDkdBI(*/Hy'TrZx3czq/,~H6zx粋󺬽 DKHDE)H̾^/FJ`*WEM6N} JIj#.x[s= J5!3yPf;kB_/%Ք8ݥKה>6,WWh~aG:vF)m|ȥC(9CҺ]?3 [=xhIw_/!B2 3zI 4O)zr;6{.I`Pvaq/t?}Gz<5MG nWȯ"djһ!Vpzw?ӏisuObcVgJLn{w-|ojB{/퀴 iig5|߾Sn <5pCYF4ٲA6TtZ^xiyY\/ˉsyEC !Xt=>T3 bpdG$)zDEXB0P7"Xӡ~U G3Nz=s4*(T,Bj0(4kq^D&2ɻFr^F, '1#&\lzVަ 2½Leq82*-AG$x."pXE+l](Yw̠,Ph`) CI!gYJBk]RExKZ.~!a ޴7qC79qCx!gh7>g39<Q'WO1W6 Ki|@lŘ&&Zm4rJ° mЭ\O^!war1ی? 
j(Ԍ)Rr#DH>J2‹W!4Ƅ^Ay:ǗT|8Q#VK2g8ړft]`)3s_?v6nY+bzpq3#QXp`Vkt|vhl 4Zxgtgnd:bŖD|3rz!l; @Ά i䅅u,z&HƁ<b1Mh ZYVtkDK7WSBM 1ot&zXb E^WCYGJʤeLuT=HjitQ'T'{{ۻս&=A5X,|WptL^o]zl(~zZI2M)Կh $՘rVF?PAK ?q.o"?"PA1 Kz'r.dIgFBav?_fp;d "0ik#K^QNjR/;l˴M )jTJ*LH$Q4!ˈ(_\Wl:?#Y+O!Z"5bゖ3T` x1Рs1P޲p+7:~PEƵ]|u&ܺ)%8Js*CQMB&GhE-.(+@=N'w=6x'5}bSPN4y,"PrDI9\T'y>Zz:L!k5Is(y)Y?%DL \Q% <8UPƨ鰐]H 8|p GD,& ”WD8uIeͮL]ufL㞵ɲUr%?HCÏpp)Eu=aQ07sTwu|b\K]iSb4x_\;q6 FE d(vpNNe[U̍/9Arqm$ZYqzBDx#\.p6 ׽HJT_ߧ6Wm8BjxYۚD;zۦ+w`]q9&TD5}⊯6}8;Z*z_\=D[sm}7`3y4Ii3ܬv8RL/流 N5%ͷζ5ö6#dy6ɴ' r9պhs)0scnumnFZ<.H]_3}QbLP~CZ#"/~$0eQm Qކ$",RMggm~?F厮J4]M˅"o~xcI9~{be}>̃! x_57?ߣiV޲iMKf~v[vz]ôXBM!<%Tg~S?mD;YёT(0eïH,> F?0\=.ePF8uQ##JG.Y]R^oϼ:jԣW-)v6碭-=xJ܆հ|kD jnXݺ_G/G_צɢyʯu$زZ^-@1b6=7T4[E_[$~L~xPl,.ԬHBN˫{W(;n ?A8u>Q0ȄK]i4ӥ C4#Ee&y֚a鹰oICj&Psj8hAt* )GnŅsa=76p93I@hO5ZkS+˔TUܮ"CWQn9U'Z|vy*8kcЈI#jE۪yv1ˇ c%΋\ UƟgNub!YfG2Jﴑ"ikBN.&urs/,K+{pc ܤ9aQR)Ӄ XPRe&08oMT1HL451sD Dy &g F8k:Skp;$.v2cȏ1Τf:C8#>«$B F"j=փ 1T an ʰF%d|IsYSypHk)EoA[pn_dVGg7غeW<=['{zznJ%mxz1][B Zo^8'm$̻ŵ[i[q4":ʷ8ܨрk:gK :jI aP(M PiV)ɩi!qʐz^ q,tJ`UD|dTۙ=Y5.lmfɅ˅{s \x7.\!}_dx#PsvĹ=֭̓Pxi<8ckHit QG!ANYD (K=<CVl`,@e['!(vDD02Wn: *fWXٱ/kY =Hf< =)\$&6gE1)F2VfX@e$<9NZ$i\g!dHh &D%Ls,+ĺz%TGem|Vꗮˆ͏F#RpPgiHJ2H*O4Frsr o&{,a4%D*@8 6^'<*](n >Wi[sӈ!/Vbqu,ٗe^/S7JGt2fZĄՀ2ќۚh#h2|g7N1K0\%Ӛ`+u$ ufp}|@N( OdkSCVL TZJPgt%C#+|4hVJPLK;;^V83K]CU wI",7ZyDg}[|5d"_Zs &*OKG "*?X\1g+B<#DxPQk0,,W%:M"e$*y5Sfq}wF&  T칎;P%7TqIVn6;p텀nD3!GlhX}uܞ#trXqUf/7lQƱü9}e;8?wҜJp9qqD]u}M?^ 7c)ŒV}`tR[TU0ˬ)u]P⟵1'Rir©7UBLa2&_(tM}U-hBBY'q~uv<@γFYY$rr+O:N lLՎӁ= ~딼kʺ;|X ҭ65ArOYcۮp%]?]6w~<示填{n;rG~ YKG?v~ͷ?|O޸s|M;ʺwwp<Fw=XszG\vy]+lI9{e4+rXaK3YڤCZ,B&??j6`D%)!)㺔K G6];Tu. 
PR Y\c>єh&) NN-dM  -{%LꓰH]Ou42QǸָDphiJS^zmu]:k3aUǍ\+7qy8kmB)T$11u5϶Wub3ȫ#jʹT=Ja#aדgIr$%!I LoL2`)zCWR}TDt㫧*!ս'Wvrtu?<]F٬N]с}}+, ]e7tru(aHW1Gt+p ]!Gp( t3CzDWXHW&}+D+9:]e ztLjzDWPp ]eU>~(V>{ީ6/6o~C\C͆FpT-3X<G^|W̦ր9Ɨ()+)Qq1W޻߂ NR|4;J^Tv'c3./ŋI8 x_X|1$nqFd2#_rԟ1|%>s0̿zPg6ՙ '(r>;{ǯ|qnnMr3u=Շ9pX(ʛݶ W/%wL,Z(Pz0!(93\}-Ο J%’S}24"`# ӾUF tQ 3 +ŹzlKxtht8 R ]RƗHWK #\{CW ]!ZO/ܴR trpnQU,u8e+D'=Uճr|~{6O~p3KWC+P*-b+6վXPB\ϕ>^(]@bF<b7tUF+X*P5ˡ+.Gt% WUFD*4tHW ]!`AHo誆kBW-tQnD& tH"IW#% n (ø5bAF9h]c8ȯJ|J ¤pVq\v_򣲙 y< =:W_C44wyIbF2FFJZgjz8_in)?^$`,3s02e1A~Slll~HV"Yu}H5Q/ETgTƜh:%ex;ˮ7 .uTT\ggr!cKlrdkRd@ԡށҋK?F]t7x`^g r>/ĥ%]E"K+Z ]-8d> =@uPGO-nٸb0o:Ñ׼wC(S'CDc/tv)i;rl6zw{[xLg@m}}ݥݾRݙ%]r8W`M!/~Ѫ,驃vOn>9&U;ꖸWWJ?of@Q;*Yey@c*X ;̧2p@ϫ]~ı2|cvG&3&;@>GM_?l[2z(ZBo[ds$oUn:Q݇})yFC27շio?ָ Lr}z^ۇIFjgnbcrTS餜%QysoCMNthV՘Ycu*֪T88\MtVMU<`tl$`?44SJܺ(1&XQ,LoI]eY;ja@$ZA+ω[KBMP![%kdDMԴe]^u9E˞h1h驅\6޽?ޝ+5;b %StMSjJ&QZjZz B؍ ]ccs1 EG B%\&T{K`q7*>##G* <2 4⢊L~.jۋTڧT)J{A13X2|șbO|\S>~=oNBcUͪNTRϩVRmm {Qd. <6X}ZvNR -BG[G7h cm VM^TBa-|.`5HQC_*$ڌϕ!jT&b=i}Rl U9]0dTW)Nر{"u5rS`̙=HT'*H #-Ġ;*T{jsm.5뎾0Q/3 US`q9gdjb |i氾[683Y @RܪC:}C_@2!VN5ї< q'Yge0Y\W&+3\5˶plQTI1Ҽ5 k9gkBE (ǵ{*%nWe@Pɔ2 R1N-` tW jPX9M  / N$ (׀VߔN i𷒡2 VHLB3m v^Eo[PB]:=೮ ZȸAæ~(pB&÷l : kfr 9[4Rj]&%CBY3'Y`>8Q"1ІM41 Bq>hҴ ]| CG{;uYdcLB*Ca9i #NX%C񨝯v;ƺڶNc>񂷪[_ V=V>;w : mCka%a1-A7 /MăJ` +$_{HVp2:fxc/,,tF]xO) >@&=:)Xel)dC`F`1G`^BdN&A PG-C@8r÷ Wdz*YȩՏoT?/ZźsaʶMRB(NEO5s_ƿ_n>Eu'KSqEX`dq6A@F6P=RsM_ y1ρ 7DR\}q̡c(#L϶_I0.”A>aK q["GKٖЎe>tXv=ش=CɗM4։ d, *QXOkV{*Mf>@M7WAPW bT:i1`YXʛ; Z&[a-\i`Ɍυ M F;`=Q8Q= FVXOQpcJ C\1d@bz-…Pq,fSMXpU] Z%tǬ4k&GZEp$b4ubٹH. 
PƧ+{G5x#W> ՠ z?^ܼlbP>Ed:9_Fph*}#pܚUk4W&auT5Gj+~' 1-V?_K vM)/ANVM%ݿviu}Ioim׷s o~{{nW'ohï'x 7ku5i{vu~n\>8Okus m9ض)qwn㗶@I#t$ ֆwO OUvl/ih>ihw($1 dM$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIx@V^RȂr@kb@@k@@L9#I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$&YR,' ,& 4~h832 I@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $IMv/)  d9I KI1\=$;F@G JH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ tmo77pݥzu~U6Ϸ/Ck .{LXNp pm\Lp h;Pz1>V:.79q paԗBW!VttU>Up8j1t599P t+So_kl> 0z7mIhGLPwXte@WVSoج\\gBW-ӡ@ҕU>F f/\BWNW:FB:,`V7R h@iXj1:h_B͞ton^r'Ξ(/V?O߭.7V&g3nZ9~~ jI&xn14 Q-ZJ煦َnWW͇] ^ls|^_cpڜP_mXOӿ7Wr{m+?p}SX=gKKkͯ oo5o.ܻO cFUWˠOjw~3]xbeKnYUuѺ`kq<El}+L' vfGcWkkm^S~Sk1D|.ګEh=k9 N/El]!$4պꑂ@<'sɠDtBm3Q`[ :\1WIW3؜%.UN-T+A{(MGh(DŽ^6!\H%0쥕W [7]iuEf]PWF1RB2[-n23LVuR*aUJ3L8 dt.D-1)exJ"`͒dt]WDM՛ѕ:0Ք) z{+>~Co!ޏV 4!,R=t\?S[HFWm!{i+tu5B] HGW ,](ͺt%c}]?Z}g(-ϺTtu2"\Lgi>"JPYWcԕ33/(2{),wwqV&D}:pI0dHxPEg4tNQxL% N..d"ab2}r$*6֚؁IFWtBŮ+mc.j2k8#m•2]^WDEue-8ҕ/].TtEǮ+tYWcԕHHWZ&D2"ܡ(ɺz3f2ǔ~^Z =xK~jQYWzHtE'+5,]-uEf]PW.S[W0VuE`F+:M*~t: 2]R(5d]PW ORO0R nń)E "p!huQ%QZ5=BM+Q+di&4Q'/;-ٙqRH3%K(C`|~ґV x?Joَ13qؘdtE.[H+DD 2j\=vA\!]~BQ*u5B]91 pHEWHĮ+̝7+s`֛)o O/`9b~Coۏ tA?s2YWz.IJWp+5Skƙ ,L&'ܡE ,9D)xxٔ S ]Vh]BJu5J]Yf4Onkwͣ:tbq&9KHW !\5*v]YWoEWM_jM?kx&rO\-UOڗ^GW( +p]m_u\W=X]pKEWHkZړRJpN+Cw{jug]QWRp,!]dtv=zJdf]PW @ٔ+V$+u.]!6v]YW#ԕb\w>bչ[UdeY8'xe\¾$a-B)6SzS;sfr9^N(%`g {rJK-[TyPqa/HѩF(,m+j l/kW?2}t.B}=}O۲LW+.>sps=~>r=Џ—/d%.e5/ǟ+_iWv>H%ղ TWwO=,;GESs3=sO{u[~ZH[Rަapxe?Ss jHXE0ԏni~u[? *YL3_yqaoQ4ztp%6grJejiUPrv]kX:u9[VW LY IT;`YZWA魷*:`p8ԎځqPPlmЦ7  CKkawZ]fnf9 M-`~sfHfU ~E]*|wOlթr>|\]ۋ45Tj ɒW7ڀ d 8f2J+|e@Z^9 ҥ4^bZc–?~ 3z5gkf^ [Y{|bq{߄)y1l&7~>)]l&XS#CϪBOܜ ~#dO50w)oʖPƸPMs9i)(Fe(5(*[wpUVgOA;`HcL8 CB|Ps>ѹS7~ܘ4ekWƬ`XC*w)ʠmpUl@IK%bP1vxU:WkfkYW}*—)v}q@ёAW\tbb6IIvoNMx~u;>ؽha\/ڶ8O~ٖ3l=~Wd:_ 9<+E틋<<5 87 Ve̗$z",&|5PuhJwzxc_o/{_'G쉘~ꓠP>E#j}{exAZ'  b TgjQ_:pMi BL!?::dyFL`]4I)Y>h0PIU^-Sxǃ,] u:4\A#k mI笑M#㬽% D\GI2q48ssJ`Ńcnnw@ CaB=)hzgO9Mw ]x'LWPˁޡޞbU׺m3+/}u.&UjcQfdZeJMhƖ<0VF)Xr7w>5di! 
FKJ8WQ5fK22 0|:|4 m€j 4۲qnpZx[aNsઘGN}s+->?k>+6zR_Tt^*e )^jN8':Wv 5%ZlT(#kɫ:WTu\_Ųde"-X+z«fDmk_ԝ;]u$c ڂu:;Mflb߂(e>5i[pips٩=hCΞ|YGyA(@,,ӶeQJ ͙0[p7BaY[y:HKk_5eUAK/Jxux1lx %M(O7H]5аtpHڝYIm*)PWR4ĉu'r F͞MAi\j\9Yŗ>o-~K{ -*'4浖KB]+J%Beu+I*mg,νMM;b}&ebQ[Oķy8f2~aIXBsK*ja>h *E) g+6X,ͣxfz2$hk"KIoKVږ; Y*XG5b Y#Ot[""_2R7 #kKH wk!Pf*jF8HIBAFD6F*M:DBQID(6% `ld4%0V:Kk،1"kzˬ.F^qSURo %8Blвl+@=qp4t"|QJ!Ղ]Tt9`N׺|SCC܋h[:KAgI22|HkJ}a1$F*RLfE"k'/&:a) @R\%Šc0ɺZW1oFΎ[UХ`@˹g9{=[:sIJ+'tQlA^%*NG<&iYJ^󿍬B* B6{ԒuV<.t6;2,e>Xf|gmT"s/u{o[ŝyrפ[%=>X{ýȗ?O]ʏ_r>8IdNL2 BN8#UF;:M5 D-dǢbCgoFB_ DGLQJjflUf.Ԣ. T)k_Udf}exo,/P?8M>\c{(2ye yr%ztR(T" #!e5p&dMUxQ *jS[ tl2A @lTDvUG+ri2[&hfܱV:j# feFkR^bɋXe(&6Da,QA!$%鶍i.YBfd(źIdHyXG1!ە"QM.9ި_źUx(5Q# (c9X@HfdN">"(kvb]DRmc2+A<9:PꙄ){ KV:MKȵbsc،ω_ub8uHf\^zQQ/n #Z^=1EX)(m3EEUDV((1Q/>^<˹wMc}EubcͮgsG:.(hя/^im")r k5 [f֦zaHS$j G{;ax 9nx-wV\rZH9Z (":"g[d84]B%T+B>Vɕb]]j\ݽc{Unm]gwsF +8"d} I,dfiAcP̸g4E6t2]؟_[2,b٠]*@M(^% 6 oM[Ev=GniAea00Ubr*A9!A֚Y\䣖GAj+H{V ID)cR&iw@F(d$-SEcQWX65bXʫ˘ B-~,ͨzX}<]FWj7i}78c7K?N}j|%|Ap5>Y'ҳ!P2BIeB\3gfvg3Hٜ Bn"0WYJj qY8O2&y߆C}Dy%_J~D$DޡOw#].cD'Oz-GwI{6#9;8uTε_Uw:MtQQ3xy]-zݬ.:L؎2l>qѲ P'Y 4XEǨ*^XXBy0 p=E T[<"z)ҐDh()bP¥X뫓ۂZB茈ER}H!9|ph  Rxs &厸lzBCExMy?xwR>'C>tjIYfj6=Mꀪs]~Z/bO.ϦIHY˱C)aH w C;t~cm;L`-b2ܘǪ(f3sXOf=|(J| D&kl8[ioX!]COpEEO.F/M5)ɯV~-rȆQe]L;?xm݂wfm5x0>so_#YoT֎CK!r+i nov5ۇ~,n!Vk/wtOOw9=2Ǜo~U%>b䛇1o7Mdxooּ Tx.dAw[k5kzt 湻se;/7ai{E +AL~Wj9Jc ÆDP@+jE|ysI`芴Il MxC47dupK/QqC RF@R$jZ 9e0b,1d\ )Q5rD$Ʉ䐥`LJ!!D06BGK8y]wF8 LMW9Bֻ ,Ae6qF&]nPIh% M:$g%m]x7cμjˌU*Jj6`x>煠DnR倧>oTPufɫkKgRQxP1oUI[{>k/Zb|BfB)4'5+'RQ̀/\eQeQF" ]{BDƖA(EA"k$/1׼tH$6XjJ,%4ȅN*c)Ô9d"jF/1BU,9[mQCB0F(c$cSZl:y}Y-rN@2a|IN J UT>I1wQJ3(Zl)4+[ V(5I g'BcI3rvDM'Yl:d't뒈//v|]/073;~h-GmcYQ+0T}9Z:S:G Jƫڴm֓4%yQ!DICL>HtN)ê&P,Zw3oF q.wV)RT)9튣e2Vm\,nn :)Sɳ%VULJ@ky+Kg%CJ䎖Y->hJn~ 5Z > 烫q/Eip\u5*u q XHsJ|)W@sXg?0UX]/+ ,lރWT6 Njo]&mg&`Ewm$_amFURɽ+Vv[W@ˊeJ!%A2%Cǻeΰ3 M7#:}!n,l.qEMY>~g[_N)|GfiUVfX Etq:BP,([*/cDA1> ?0jIx["f4) e%[.a #h|D*ihrKEdvDŀ+/}V+d`P,wQ(('JRe%z#gVq5duAlɧkc<-|UWSTw84rmEPǼh3 z9*!]_NV:) =욺~.Vavטy-{)ޮ31k 
8/3wx ˫⟺KBouKF@왕pW򣜷j\+wWz.ᰕCmf!JZȅEEG=݅Q5xp= 9F:lG|8d/pZOy}Cf(bC\2Rvnm[wN!-"!o0YJ$L?xyyLr@z޿cfgscݪss3>{bE xs=?P n3I]Gy/hka o7bOgluEfoNAGcl|,g'o<j,$\BWdwNz tl"pJVxL/at;P<8<K ڴWo)q%Z"9љ"0 ;k$WژzQzS9kt}s[o7Ś i>]KynUz ^WbPA04MT4.~5;xU?)]NYķ2^ph0Fo s.! 5|3~wqSYw.ţk2~LY{{7V-9)&R~%mJ-P ?kHՄ|mv!^}l6{ɷ=_FAvߞtZCc4x>|WGWed<)S`2E*n&>ѨkFNG̍7[R&Y/OF9G҃=Wcj,i @j>~I JG2P45AR`q(C+S h삳&Ut9exVepFۤ-}b2@]LIe7$>*CrnV:I9g\A'eIe1D@fodMI3 @Œ QFE1FDo ?_q[a{:ȭj-CBmN#}Wr5r3V\X7j&\ؤP7j,!M"̆(L 0^Tr)1' 0Hno{zANJB%1:%b*',H\ߴ JVie$بɲ^h*LpYgL$-=Yo)g\9\Fyҧi ([Y$-N:U=5[LD5' }^CO0'+ VH^86%eXE`Fe$dHRS7S+]_OzY"ؠJ(N7vXI*Vh``J܁FqP(xbKb$O[lr b=PfPxwɦb hU]5YA=H[fBM=慞"&|JurZd09))D%Y5s+QeA<[cIW_E:zmC,{ LƚIW@|Kr[zI8(5$*eH&/6c_Q^l5y]\m^K^W֨CTZ?u.c!LW]z:ƒ-3ZFWT-G7/d@oKjحSaIumR8 ˲}`Zoß~8$q&ŭ㈕Z ] bܭ.3_}|ECNkEQ+@&dcsr/[LƜ=t",t HɦW« h5iӣh=@R>gc6}t^I(}lx{/Z745= 2`1~u2^$Y:1\Gԇd I@ OBYt$ Jl+j%u=Fi3"w[z]e![L}+[?\v4N5O4ƣk:_zF/GjA}g{Ǘ_+!eg͊z[(Qbs1--Lf"dm7!XDnC_ٌ`^8Nld3j{FTD@~@᧐>k2kQ<~Vir~S0F@b$)Uݐ@}{M @(Uͮ Iabj(aJRCh BwT2h,>*4BC>NW\5Ge^}ᦃǘUt*] oaHEdYJT2DX Z\ ƴ7xC`VXmh=Ĉ^(Fd(ջxUzbt*MY婉BCYFTPo˜ 8kKZ +lXlq.X2UE΂-4E;Lz_MG&&N2IDƇO .jd2{sDFqEΚ֭6}sցb* JTDtVhvrƂӪPݲT|BBo.F!8Dr%]EW,e&cqg鍜M]fuKXw.,֖D^).+AFSȲCX&jկof/Bs>)qs&Πr$B6"!P/d3԰ ޴=z.:_o rp>:wgh&?s/s`04"$[O u*oRms [1DrR P] =) %n3NѹN66{mc6(S5EJ-"G>; e-oD!!FyڨإUSք[>{μoOc|*ߟ9|rs<^ ` EFYƦ(R hobɳo6 gD|uyλM A!cɌ{|63*xe)+zJJ؛IŪczyAssu7f5$a( 7g9B0Ȇ؋8=Xb-zS52֫/#;+ۦhL95 6@t2Q-#uJK~*3<7`qnޓyt y<d͝bum[cL2VNIu`KgOPQ%`pr Fׄ~jZ-?)er;4[a BkkC$U^E#j_.V#֫?6#dŲ1 S($ubOG,^Tp\v&'͵OK|1bVw 4fʦƽ]s:5E֜Xz($(g,Xh`'% Tg:1 ºV;-%dXJ匾8Y4+z$Cy k-Iȼ2KU*HKdJ[b&HZ39'YAh#/2dd`j}  eՂI2ZelPz6~y?YC31՛: ~8G^*~''+C m]ItˏG8J~QǩwAt xV_ro/,:w\c>Z&cGdl u|%Ϭњm0ҧxxz 1]6AJQ$@›6?{׍aq"V`d6 Lv1AHV[rb)e[6%'lY}Dy_"+ j%>MқuޛAy0{?@ow]pvz%?ѿ?o חWw;6$_Oߥ)74+W~i>'.ȋ70j>gǫkٻn/?x >'A} Ofe:y{[ov9r6vs;_1m7wg^_w&'Τ3]N;.BKy6`zg^9\X^wVg,r߬{jY72R'>/Rf嬪s+G}*UDR>?/*mrgV0PeWxU=,TIs`9VKV ɟXI_~oӛ~ +7՛?{ΩK#aUH8M_6SKȖ^o16]yΛ.W!_ VKaNnP7Pw;h=FGm$m,g#^Cbd$JCE)mR TJ$qٰQ.]/=px ^N|4%gfd>Zdm&5TXY"TVjir3jx+QvۊwahtPDALIb 
B09$։^N0{}}W׸`U{d)V9m?_>u"}ԭ"AcjQLjK}(0@54'+,QWRcu~?>kTOUzoXԯlWa_JJ-TJ=U^-^M׫{IwN1w:G$g3v̆찓$Ov Ú9(1GIl2('H0,I崰l̆*?Jj (IIcV408J(DιTK[BkIیMZ @ueWܧ6xh[22|3YV8fPOv׵N_,Z4CϮ 1dHKyUu-Q)P4A'uI8L`)J H5R`6Z2,%|dm:m!g :3E2R 1zP*AmYBu46bL92 ] *c^_ۗ$3CJZ5飓-H:(2[=!)eD6pj@ &a (vP0 QĢ9Ԣh Z淋n=Rq2 }Cl''Fzc\/> {R X~[Uܙ'{s)况\~V~8$:>#tB"ABH *juP^b2i(jiZ Ȫ''T|D \)*D Tk،5c;L6umu\.\eZ7׹Wta:6,rTBe[#   *ښNX,lUE6Tc/ +jSMEȲTz)/|VޔHֵhE;gcbEk7S668jxTdj)X/&; ?bb#M2AkHl,QhcC-dF؊N,\k(d0ck"])نX6#a}ܪǨG"4"e9OTfk:QJTE>Q$ٮ)|i{-ab8k%+A~A OTDW9ٻ}{a S 6&(gHkl"d?f4*T )Rj{tpݒ9aԵ?U[~_8Kq>tE6`?GLW?c ltNR` aqRS*#󊇐adxH#dxxesc&ikH4ar1TQ$!T%X̲T8<G!kqr"X-BX߇ǜ-]FzҀpVDtt1D7!엕^+>:ՄVKy,EPգS_=tpow~sv() jI թEWKw|dz9/u~a0l-|vvŪ(KsTOmR} g5RY=_qE^UV\2Yj;7n}K'c'?QqϺp]*cdè2Ӥu\ߙ"Vin|{؞8?~O˒}-I[YtjUfl_TmOwttz𼝳ny$~@~'7 {h{?_5;b'b>?0mWn| ~G<׃?0xs>Y3[,豝lNgRˬ%Lwoi:0ug{`]s[pbFLx nIwW0˝U1Jn-"qӒGU :Ztd+mbY% d ȞcdOjRrY@"&NJqk#gM2QdTbJHmi֍Qk(BtQCd ^Ԯ980 !b 6#ꀋj1À+R_}gb۞J4䰺w^J$ 9c25U>4K`D̦Y2 ] Iݚ&>Qy+I-Zw`_@w[s&yioDUpDk&qvh>xA *ּo\/_J7Q[Rɱ7 2}/ꚨ[q뗺l N;ŀ `GjiU~c:`9ޔ>Miw&N=Փ|^(E+Cf! 
2Gd@ y]Vڰl56gR2%f^h1TTX Rb|9>ƠNwto|LJۉ:I)Dž\󫵽"E|%SWCq;1.Q )DoRJjfCk3b9*.R)^kJ jfq~}f X뛫m'd([ a|TkzKP (biSsDu|:Ivc\ f*^٩Rm QD 4(1Em; r6 H*oM)퓊PF-HC!/uf9%e8bIa1"`a19 \L%`|V1jx`'P6mBt-p¤,|+ :mMs$b&޵6re peG]bo#c )1HvW1_̐Q4$G5,tWwW}U?MLaMtZRmWbۑ;-#]\`RZ{v 逸sDu" @M772MP3Gqx)FO ɱϑ c{Zr(R%]JK,W$zyhV@QAsglrD`pFj{KkU9$gѸxzv`J'|24|N}Z I{}zX^[T/i -)ZT%-JcCŤZ| >*}7p=dnVFnA}MQRڤ0O&ŦbSzcF!ʨ/)#*|-6ՙ 1>U-_ʗoWfnаY}۲ԢQ{EZִz%yyfOe?g0 x\,i6ܼO6GƆRL'n$#"GGĉMBzߜi8Βkrfy- i'Rh#P9yͰHT9}ԏFGx5](1@U#F`~jKUWoHZ?͌1_jfEɉ Z8UE;N5: B?GIr4.><4/Ti.sHyVkW kJ5tj !h$u!Vg'79--P oi B42PV՞(yp$8ϥV@/iI(ٚnymWWvE^xX@KkR:|j&$s*`o5U.\$cv}pi1U_ΦVJEΊ˸Rwgif/fM DG1ߥiݛi8Lro>HZq]=LJJW좐L  0vDvV wY~kq<{:  @ALnVIݞ<_F^6˛b-[2 *["|T| YE`(`Q٤x oY P@շdLԤe`/quo3v|3]:?.Oo{w=Z\W$.]$-קWIJ{zpš1ʜj wJ]^d_d$+,;d?7tu}/io,Z n% AdkS0mQ&6PF}h4aG ^HT/Es]lyp@3Q/6\x׳Ȥ@5C:0sfd0"F፳Kr5Un:ߨٻ^Q< $O˽|"{7.Xy8 LeI)0q#*jgO.+P ..<Q< ;ԛpZcd Vz}3_zݨ7٣st' n$OԐ>*!Q9h2@+HA9J ̑vT[Aݩja< fFdz4RT@37:s2I^RF+OjNXwU&qkz8\j=\ ڤWK +M\|-}g"6.}uVr**m.ӟ=Ek4JmfwUY.4E TqUSnmUNU-ނ jyX0:.~ N̞$.]akIZq!ƒl5] K?\%q \)M"WI>fϫ+̈́Cp+;WI\љ$>yvRjҳWWkZpe囉AKSL&<Yjyu}bJ}8ÇR]йp J)TRD?fL?J[Jzۛ꠳- XcI.^~|~paXUMn`N+҇Tnjl>,JA)[.LB CsA)^Q7n)4(YQRvsJpV𦎹g,6벸+ͮRfKd⢧Kk^~^ƆQuEM_ӏʖ/ @HR>0wG=e 5mi^HHg.ݣG5T)JsfWӿSBçc7?s̮ `SQ9C)* =XX`rـ- 2X8l(WJTr)ly%wJ֤-L[+/6|0B\f;4kTfYm8%^**c8ek\+.H*_y+52E^[ q" ZO$RH1yaܾ_@a1'*1,Rg2DA!>'k/%M@7Dd0hUZFD1Ѧ2h(QHYuzZ34f,`ZFL&Z i*-4-]9ތRmӥ9w*dc$'sjI 3ZMjQPwC]zL/83tc$D`f;mu$Fo1s((5 HƠL3{܃Re%(4Ֆ .6wmY4Vl;0016ɗAխDʎgIj=۱6ݷ=:-6J#TAv̹a0K%BT1Ѩ*Si-Fd5;KӯD#!Y QE47:%^>$ƕ6rcFgdɺ>(d|Ż`57TIjJh%dlc9\2'%$omڷJ0&RatSt5JI2UmRR 5ƯZ)TNF" E!-юpjh)A(xK ě!0mu*T.P(f0Xs:G틤t0$a M*|JԴ/@妤gE¥2u:RȦB@*!/ݥdз#Dxz(`H9fg'dbxZΡ]K :8U ~4`&\CZ]@ע1YKjı$6j `Qk:*0TCFxZx)*u FBb`GC7,,*`ъ(X_wIAkU LjI cc0N-,5[骐RC̨lH`G  s42, x2*dMg0MQt"]@ZƲgU d0Xd2z]cJ˲fP57P)k2 [r`5d!Hc35Ӝ1 C2tk@AEGܨaB+?ȥ㟚 _ j3ku2Z" BYu@/+x絖4*/אzU ($dA EHH( 6j^cX{6 {xGP"Pe$M&2ЙyoIF1#/E5ќ48+"$%0!M@ߛ!wn0„fs-+-cMBzTZw ܦB%a15W . 
Fq]t4k+-$ @ª20COZeF :#/3* <@V :"T&dejEp9~Ox H}(kIv@q:q/,TɂNU~t}%*9U#rugE5jJI";> Ʒˍ߿Xfv.ORI ,[ 4"(Qw`ҧ5HA/F9ІhTSt]/shelE)e`7PJ%lS5 YScAZ %HEP@|`C&m!Q9Mh3[N7$- ̓tH)YC Ǯp6 3*H̔d 7J@? _BqPDYUjQ`fyh-bl-'(mȂ ـ`%!ud19Mh% Pfxf!mӤGoET^P^`A-yب9*(a:JUC}%w SCb!h\cswۜ,4- Pxf:$Yh fU:9 4Ih#mlaf=Clf1j[1?kM!J(A'ՒGhm2&F~5<64#b@(!A/kt Cےb.C^ 7"*wS[hG-Jt9Jբ5S`ѥ-@OOdPL9PqCoܮb2;@?vs!V?Q-oTb,M@r,Z7y'dy ah`R'J,"h X(#Lz@U hH?x.LmDVT 1tVnӚPYaO ywQXAR)HUI22Bha3RH~\뜼Y%B+M?Cb-”X[;k*>X[(m@ Fx8XBVp*@.69EKzʤX??n D*Sat%bPrₙ4 p@7@.zul*)VfַKCQ`@Y 6[!>E!: C$)` %T BBcZOH]gF@ޙ"BP`j?ozBد͋jp6$U':(Y"~=~8>U $&Q1ܚ믯w~׼68z9P8镡l+|X|W~gdZ|5]L|ЛsuӣW G:[Η''Rii>c..W/XV֋i{/V×K7$KfmMqkk[I #&Q2K䩑|SryYsѺ/޸QkN'P4ѫNNNNNNNNNNNNNNNNNNNNNNN O'08q!\~ (N?H i!;;;;;;;;;;;;;;;;;;;;;;;p'HUK1O^JΧ3D|.y.N,~v>/~]X_VS6NN Ą& :]Ho~8c.gOb#/\(~_{WR_3{u(xzmT16Sךo?mH(/}q|=M֪BB z 7[(<wh + W#F2]}y(3~dr](c9{?/A'%.}}/Zǭzrzq{vogߦUˏG{D^GOJF7uo( i4i2iuz>jQJTc0OGLe.jdޏ [Zȝsf<;Z<=?>2`>Adw[cfMWgG\$,Z;2b [JD\LJ]5mtz%ÝhwzWc4}_<9_'گWkNR7xU}k+믖Zz|=r1:wwZvPi٥Y$1O"d[AF6"~}6W^6Crh9iw-Կnjm7/}uuv~9s)=UCNZ8|+9buɸK;߭2~tZs-BLh4PI-El-BQ9&Y&DS+>KR]/^=bsp9 gW~|Ї7}BZl_Znjr*x]O|an_7a].s~UZ펳^u]~kk dY颮 yFהj"tlOg?泏$`9}[-=[ߖ_W{3UTϥ ϥrϥܝjaWh?w?tSN8{/fp C>F2qO1q?<'OgFoQTk)% olI4&YN엋D X3ߋQ`* 5UfH:E_He|tI.4IokTBU'UQ%/$R*U$aQ$"yn/H5o ުy|{~z:rw:JlT.PץZ+r9cDTx]/6P9'2RYŘrF-1VXs{֟XFM.!t.Ohq(Ng'W|n]77P+yoW!(ah ߨrGN<[ĊcJv·gOݚ2%Fq5s-,(B u2z+|\zǰs;2UF2XŸX=^OV̌Oi&%ڤL/n>Vo>0}ͦ`@B+5TR(^ Ut+M.Z$Ykm$RWO.q ^*JaS;k(!6eb55^H>dvpnG켘hPV#MLqdPDjHvp4H \&Y*e[bTX]]!XxѼ [OJS+[GfWaepAp}=k_/#X}N_j/ӗKvR;}N_j/ӗKvR;}N_j/ӗKvR;}N_j/ӗKvR;}N_j/ӗKvR;}N_j/ӗK|;vn5^OPt:0n/TA<&vmI}4 1>xvrelbB;mzs|FZwEk[N69J0'Obu;Z>1>l,+9Ce/]3ۚj #JhB(1A?ǭ 1"H}TX )HYѓwΖ iz@P\٩8|B di]$+ 2wMp|vZP4s8t$Ȑ<%:bqEF#rmm3#cl…8 ݲ:b<t? 
YJ[?JznY o(0 먈UĊ&"s ˿~ѴFb8 }4dl vZ g}`RX*%2OqH "q^)R 8"I(izD$N0'ϕ5mˋqIce, cNc>^|8nu%(N8usuvnFې^jt&Jpf烡 ,(6#_ɣ @rw#A)Ch](jG(…^3Ga_%P0@mqkB!Zv -0oZI+(f}b_>30CKpML)DkWJxQX)gm l:XRIC?pN'(雥F<ͮnUZ~>|v^?xd`&|a> Fq88;ܮ8Gx|V<巘ajI֞_E[7Dk75lZFR> YLx4l&X96 c+A{mkXrkBGxyq̒8G8Wb2$ZD%YJO__K̗ \\y˖n f $U“xPd&Qr\QaV?!FǙLaӋoo?yo?{OO3YppTYN /gF?> ]u 5]SŶZ.G]O6v:~_^W _END#AyA %6 KHx7n#q!Ay,Yp);:R$f,:eJǸgxLj~s С=p8ܥwHAlFX>Eɉ&\bERsx$8BANP;[;"/Ow]nvxרU=E^ 蕠v־OwӏxBls((᧠ox@-"_6le@m#GV};Eѝx cc!X˥e^+qAC,H;+\> Eqϋp(VsP?voӹ)Xg-JU^W^_} }] ,Wf9y0EN93X[eW]߳udgP2)h"0d,Nsɱ2oƁM%8Ġ;Ϻ?P?댝R[qy+s2r0珮R}vXj<^y Շ#Pq]l$;Ps:Hs Gx~ ѓuWs)<=tu%Љ,C#/kQ33 1D #PfC(R$ƤM)&FtvQ}C 3v:HUX&xTXaXR9ׇ-CHaD5 Bn4O.ORՉ[*dN*Jy1=.9`< 1|8NopVyF:8׊YbK-M wWg2cE8i&\GRۿIP* P|}R`BI+fQ(_v{򝂢.$}ͬO!`k6(*ķI32Ѱu,4\L_=I5%TF(_Qh6u^\?HzO 4P+K 7j88qvM/䪖 el)2bryVE#[)|źi(k\=;uI'Y[^SZ5Y+EGMOL2k6NT*nb nhCru^LsTBt<%AhC˼^ۓ47(/N/݂Ja:o|Rl9m< b-z~9F!r47Mʹl Jjt=9Qmc}CazOyU]wn-wמxց'N3y`~HsƼm:D;K8A{#VԦer9$:`cyxjɠ, bnTn*ҡ'3vDZR_IrǓ]a͡v;jL/- ɱaPaRt! ! P$ԙBiDOy9NcPp`(3*h2:ڀmT ƨT$wpgܾY_>N{lCY|hWo9n)nWYV%z4E=U+=t*RhշX KоBΚ"̦pl@+H R+c.0jk_{r|;ʍ{6rt=f!`vG0I]O e7A;:u-XX-9|дeVVЮ?dʏźX^+k)L@yM>jb w MX5u[]/_At{wI=.qIݽr٬Ā8ȜsEX IUo|7_<*=G*DPZbB KEP9&,B$HF:X}>"C2w~nXrśvc&oe ĪJ:6},!"8s1’.3i@E D$0rϥ0{i-_wc<^H^0<>fq0j{Dh ċƠM(OdeTm =Rt'H#m/Kq{ZNz Y]Dn=Rv@s9JGu}hODr`P^D%E%i(XM`k8OnzzӍOg=<_-DwE3B*OBT&Z8"͓?DUL%O'' Ӻ?[ #KG ^IW*)%W` k \[vұ$}tzUhٳ](I/RvH_B \7I>1dr1NR,a GkdށHMXx+Gt^9E, mB[%aX'%}$ JFY}S#-)ZtT7-{4{Fe7i^L kh;3lCܭEJH)0Wɓ-kv%/!/V7m:jݚx2)㵄(BxI_!5^V=oպ1aٸqѕNty]fTm$؊R"+sz"zff>ued#<6N_teg@  y3ˢQ_zzvxz g$ q %PWYqR+ѫgO7dvًHz19Lq^^L^oMd`T  /OV]ʯa(N~?]&=tԅ%Ӊ BggOg'ͬ0_4gO O 4Uv~>2sͣ_̤$Q0Ո`S>1)+ʳdQó hY`1Q?Bk5|0˳`8kZdKv&aZgUٴU5Iv}jRl%ݙYZܖM,)_ʰpP5 -'v-ҳ.$q]';nY$S%=h$Ϭk4ӗ7b+rSyj)R.2m0_v>Jg7e ]|J~C_їb+>[pyqM@pJ_՟eO-~3rEHga9Bab F݄c9}g@ڹq 4#yՕ+m@^6~[SLbICCs%jsCNWnb u7_\crH0y)[9AXp+u**šS¨ TKdA' i;*-,&~}1Ft g'3V=2 ieȩwqJ u2Y,E~HbݛO ,K^I]Ş4=LKMuxJ2E?ѓZQդ0mCR3Bzu7}Hj-;x[cV9>iXוu乪~s/AsX"wAݻwWp5B >xişl-ZEB0xc=Q/BD,_*2dI$,drhh0KM!FH4W a+SkgFRlͮ췔x Ԙh`ر 
ku}^1Q}T=(wS̾|WJ++TBꖃI+{*"r$|QFiOI9mMyl٨v²9|0֚ԆhNN$J@ijJ x""eQbLqFC"ڠ)ov!SpB8 iƀX%cW ηg M{kL>_ #ߐV|uko埦4m>_om>M^~xAxĜQiކրM &>0y޹u8[h;n#Ylg#D0%k޸Rf*[ ٤$2yR9HJŸmy(^YP`$P0,21Ba lɘNlgz`fG_٨936&`] l3 [$c5*S|$⤛M`LQoъ|*%e>S,YK$WNhCG)cYj&i o1ΩD3CvИ$PjV(>`u1֞69(1Ďi1R_[osLL5zL8Ze} ]5hm)*Pe>k(j""U`^h[djƷM *Tb N Az1 Sb(˘K&) & ygwbuCv:U/U7PD"3Y:6zjw=4!Tjenp^^FuZ$h(xL:yYBH΋Rˁ,q}:Zs$W ٕv'OA)Mʠeu$'Ӎg%l)ӱo}VR(cDf?`c9C$ 9 drj}HuJ[xk;&٤PWvǒ&fsORm*9  '_t|@>nʝ_Ul $T MS`r.⑄͵؜SES$)!XD3؄l*Z5UmϾۤj+9xpE)(C Z_I~jcloxͫelͫ:o>)/ kWy?`O?VEMv,B1~Gȑ)@|.h٫e$w?o?PeŐ}ύܗ,t҇ͯ]B}#,ݎ\msm:]|vb᧫g~.WK=/X|O.´>>F>Yn\ŧ}o?+S,0݋3V~@)`źipBYsPvT,9HLZqkP +")`wpYH/ FۚbgAڔY2{1GPGƼ>/S}qv)N'SG̿@s;V]1bws| ̢+c/uj"|r^"i9աvWPݩŧ/kNK"}<醷t<¢B_$ߤЩG2*ֻǫl+?~Yȋ_{XXܧ,?rsAMK\_.]\~ؐ?[i}d6?yuUbCVܩd\t bwj-Ҿ\ohcQ:KI*$SyS|[:DԮ{ޙ#h*]25kKQ.Xbi!LzjjW]zI-, Z(GcC°BNfU6]XT"tuʘEYg66G6I;/\އ>=x~*Jo-NYHJ zvϭL(R WD s>6h%E@{QF{WojVmvs|Q|-UcvVZRZY=zl%!)\@urtaL6E K; i0'-@ je0)()O  Ile˧yS 9Un|n/uk/|LmAdUݔ3.۹ڏd2P^ ŬKV%%c pTDm3z~}TtJ]HV>ߩ;].w!rt2OooSSvr{#XXZMyMv0G`gKĢHRǝsMt]4._ɒXJUYBELzd]HBq@zOQDTBȜ*L"! 
T,J{JQѦPȌl j siMfSB0`pj"MmG2wz݄eCv>+g:Kd",%3|6"ɨ'3˿Q_/mLnhhu3Ԡ3s`P9+soUL L+-ɷ([pa .hvUUkE+thQUEF3]]iRpI{J_W[[:]|&{%BVփ O:e[[\],_dWѝ R9!ȂkA+{*?ޜ*)\y7DPME: 0yڦ.Ʃ_̦6G_׏]QĀ~_& wj,R'VwofԬRu_~Ver{%WiWvc߱Eަ}D5XِpM3ͳ3u)\Q"R!6DW5Ub `tUQʒF1*]1`'ڡ Y;EWS$HWS4IKUh-N*J7c+ >C(hJ=Gj`.7ajZ' CSWz]陮v]z2!b] e+tU::]:f:BƘ[ hdʩYJ V` YePa( Q;+D# 4*Z<]1J)f:FB۵\u੦S9W:ju|3ץ&=eGo_z9%%Cو6"O|.B D; [9u)\Q|RcKtŀ @3tUhh5L*Jg:BbeKtU?6DWSLWGHWIRK9 Xv5[+FkU0%tublYB_zCǹf;-r69oggȿeQB3E=u[QY5dT++i*Z%NW%LWGHW^RPђNBUFBW5nJvg:Bbka+i52Zr0aNzN;Fr8R1\ hB|Jgfz1te\z{"0/ā9UrcЕjץ4޵qdٿ" ۦ., 64E"5$͇([TK6;eTWSz80To S8rF(Uv]JpB] '*ZNWRΐ$h+M ;`+u_J+NWR끮ΐ3vsA]-K6*sArsx`yT%/4Mhucv,i+ϓ2E1<[ܰ?G05iz{.֨J<ÛB$qFџa\{3.o-`_/ ng( i6 =+xJ"tEq3ҕ>  "\ ]ZNWrk@WCWN(#"'B\tE(0Е=%`VWg6CN4 ]te:A2gU $M ]\BWTI5C@WgHW\=BCWB_ r]+B)@WgHWBX 3+{U]+B98gIWR1Gt9WWk;Ut.tZvVL m ^1ՠ4xF[k:s=pNz]0''z4sC-w4G+KpOx#w~ PK?G٫tOAElYo ҁy"tute} s]\nBWVÐ9;puL3Hh:]!Jtűu A]++T_*u"];`w+a3GtE8WծtE( J}X ӿBWWF]ZNW s+Ŭ(@^욻K 8kٹ:AGhiT.mp #`8m4duPP?YΕFHaˡ/Rо KaB P k0.vZM"uB(0{teVTuBp3~=^OWuutʲ>9g\:Bw~ 09ҕs_+,?tEpyoAB+;?PfJê^K4'_ : U3Dt3(tL;1USzDWX ]\!BWV옺}J0a]`Bu"['tu>t%O ~-B <լtE( #])i]ZPW23-J^H|&`O,MOkHC' !z&Br`wbi ީK Ktb+߾*ԏaMjV Ft&l$be-xQXsH.Yjg ~jta:γJ'fF&;7?_>|p~]_#et֊JzXAGt|G?9?o6η(N>êeT6/aG[|~v%3u {!MN{>%~@_M6_u-t>ı/ڪ'>?OJɞ⛯c 0oͽkS혶Ƴ$u.E#xބQQ8\aIC.mpn07&f(%ĊR\,ZJ9Ykc!?UázY~--~ⷚ,yt6եYcUaI\4 rCx1D0wo7[_lY2[L7N5} }2#|w6@ i#?,B bO?^iD/V]c1)_sNW //!ǶB%Jjr?ػR.,/~JČn9Rps\;#gfSF|;<ہPy^o_AT!,J~΋uI'Z>+3YlO`9\K+alk!P[g%T&z{ 2'䲉P >*J\/V s9?̦!ot~Ewf>310)*gd$J"Dׇ1@oOp ؅? 
yx,ˈţA" lS9DAVwy./~%GJKnbpJxuK?#I:<$l cFQs^)j05E.Q٠꼽iTe.gP/vqUnky}mو6#?!ir+ln:`U،sEVG35?di77<_[*f\|>G:0-IW| t՝'f֔/m2߱ǶOe_tf#>_SVZóy1˙Fy+0N䷾;=摐_nOs !o&xiN?l1M4{_Z =}NI_%UT۽n3nV};{w׉axwFV@18i;Q*H[|x4B|UMUBn̒ hV"]eUrV9&&xCnG{Ðۑ$v$CnNIɹ4SJ$/Z+ T>{_9eLJh5.*od Ttlv@B%B^"K,'-J!<.Zm}{XM+Ζ*xH_L|JA S7lL}*`jD z4zUsVhM?ۘ\}YhUJ9,=5u.k/'QRlT"+UX$W*Dwomt1xm[g6#r̙M:gK LNd 3b"GZYB8[J ߪL0Q[ctO { ǘ4&(D\[v[L!O7_Fj!XarذV[⧩׼b(z90c_>u>g0NǜP )Yc)ǖ$W)*lV9MY2֪yV0\p\C7Ƌ#,18-6gcY0Dv߿3ӾÖ1|aH<̭ + bd s>a$ŃLMxVhXQ$ `š{rP%ZWJ !2 ~rZZq1RtZ1r0¼񂋀Ǩg,>E!IH6 b~Y qB󼸿{%mFC'؁FXw8Jr K!tG$ND*hŃјRDV"+b""~9?Df|GdADJL &<Ljnu`3\I(,7b`Fmiu:Zk!Zo:9Aixx*,LRI6˯>vjtTZHan܉J5yV2q))h)y]*,oi3k0t|QtJQ䏣ŔeQd%9r(}6[! 33: <7ET.y ,2# V3:dcyUmY})зKmQoO0h(h37QնTH|q}y_?t*L YJsWÎ/& x<]7v|,h@ ;nZ@9 l)˧{6Ss_F+_.l};4T2z^XQs{mftJ )#DBd>ΰ% pJUh<,٩w%dFnK` ƣQڞ-Y 1Zv9{aS}>S,w W ixυBs m* LD˔1YѼXnk"C,JޢM䉳sMnh{LƕO(,z2yd3?GV3}fiSZLlց%@-5c&0)A3jzso?+wB~VqP_kSA 4 >1.STjʍ^YHS#Z%JxtU }!"E Ր91M)lAfj^޷6}*짋|",L8`YdFfBWRj O@& d^#D#d^> KQ r\,94jN^F80Db) &{^T1P-W!F!GV6j͙U>嬲T{*ĐS X1tFNx}k|洽roÛ2Dsw{ty;Et3-/5z>e>95O_@+DrC DQA꼣5Z9X%:OZNʼ CCxQ8x:i-t1 \>HEI *WW$0OL6p*lD8R1Lk B 9%ٞZ־\=q+P#7v뿬+ii\'9Jr^[zy4a0w9:6-}+{ݳS@#/4>՜,jVXxjO\/@ \ߺaac06Fнd.+S.YgD}5_]U1USs{kG/#TMG9փ4ޙ-D#?o>w_Cvw viRuiN]8kB>k(2Tܬp1??Vظ\ ǃI|CwO䏣l[͑[=ÕS z[V9`?Tᥧ*,>_ϋ]ͦg 677Y /f/>3鴙⛔]8gh.W~EjԌ !gz|_UVD?KJN'U ܫkϋ_Ǔ5¾۲ŵ _#19"urqrAek3 P(G;9z?&ڥMjV _)UkyM"$olѢXayI_E_DHn>_C+j*TΚJd ܓ-DUXj6$ߡ Cŧ.EVykl>[]઒/,],duo^_}3:h._]2L}5k{6v>ʶgu(5\PK߫kO<ϡgxa>U?.}E5O]4.8wW`\}˥yOsS E˟ ܴ!ei>WlqP찚^\ߖure21S4ƹ~gZ{f}n9ܛ.T]I AôH2AM{yΰMMxڛlbUy:t׷ p6V]ܛwur3]{f&]R9M^>-"16&P+9: ̓VbBQ`D]_9: 2R`F%"H*s@0}De <ƞmwQ!;ąGp>RNqn>u#9iʅ˫AlńM:ȏ!NTJ 2I|k Ϗqw~ A-?^3v<{}qF+%DcdQI8pSg\%JC`?5w{sz'ck#c"0(VSE9)b`\4%sȢ\ހC+軯0ANE!vvKo>*%d1WqY'&}J*E QJpƒupiԚ:\ `{c89"ZVcF"ђ(:{Ԧu) TGiux vs{q`:Yp|kNIH>7Rk MxNVp /Y*D"@*iq6A쭝3  3YnQʛDi2'K5ј >ձtF}:=&aa_{,X*ȄhJ+84p"*&Qvd/, Mz}Y>s9no:B\ƨ x,FR0,qyO*\$&_'/W-/cWLAs7l*.L{$ǜᔫs yƋ|:q$0~)҃qWB #As0-}jEJ:w>)F_$\ETB !(ga 
F-Q21`$VKQc'AC@=j(2K.]x>UCXhQ&̺6rSM|d8+X{bs@F|%'E4NrٛޏO_]F-ΗL7`锳Gmw9grAҔwÎST,XKZsY,n[R=2D)FPuLCXܨ/W`THAH[ u V|>cK/E%y Tje:}(#:\V b*!s>()Q)F؜H[k4OYY vV Cdz_}[ˍǂi[Gy;={$G%g4I>ZE?\DJι/qPu19K10!t>+xH˧3x+<覯 b~ak~pL\'0r OpaT2`U^ GʪJΪ5b%|j|yNBKB^\:<*6 mnbYOrZ/VqZ}qtâΊ_K/;?kZjqm_|f6=~7Tيj7O,'ȍx>Y|ߢ^_sf9ip\MҮvt< % θRDKD, mi) x6B0SA( Z-P5Dde^ME5?6πzwVmj,=9.:;)+C.X E"6ʝ%T4*擎R8#ތ4=DJsI;db|2e{`yx]+`ZKW5T8i!TDCAe-݉c}|0i .쏟K(MAEGԀ>20AMg"_ZQE}ƅc˸ @0$򆤑E!,$!Y*QZAΑUQkuni '=~&sJ,Q;ӎIH^3~ƚIW{#s]| YnY2Δ-=]>5=mu%\Le/iEP4njr+ʌ5[C ? cD.n"g4;o 5hd1aAa'W\+.d:CxyC[6E .K6\Kr5 ٜ3sf|Jܿp8C*g\WR"U곽X~FtvyqlD.5,'aV8_͜hy팮^m:QݘЯwjZuV.;?V74[fèPyt4ܮ9Gx|Ve}췜!^BF2u$!t6 kF::B0W g0b^ǣb1-/')iG]dۨ視Qy2>?'!w=B1NEVz/o[: ,Q-篋ſu>Ej*t e&".)pƓK<颢;6 a`?dڮ:uf @fNf2;QW[3(ό-)[D%vmH6oUzsU|㢈g[t?_{߽5_0 rUR$%1q{_}r|[g~Z O}i#mbf`>SoZqlu\̍ѝ6Hqp€=C5"Qcu**e6䶣&{c-)휏|6{>oIa8Ř &9vÈ;u.gs siNqnž=m-Rۗ< ǘ#ܝBpu1}Y&)lV촰l'5#)%)CR>,ֱA+#ju" Szд%QOXj>borAoM|Yv>Ň=qiDFfp5?w>&sO.%t.| y./OŚute^(a2]ڥATN?PSy IK^gU9gSIZ* #9ߒd>S%guRB)FWrE'})9T1ER9{[YWhl`&瞫ˎ=3޺_Y&j$EƑ+٪\kB裓:g&QVPT$6MzjDA>]Zr $V!R:GkK}Kg[ 9yڋCY:՛tj+cW+7Z?i͑oɫ8qhO?rٵio50*=SL1PG8ʌ>ݎFFDBe묠Q[-+JZY:8[s186W)]lHC&cor&cJo,2vdp •Vw30xq>}YZ1tWNlUF\\5lL,䘕AX }2>dwéʨLB@5l*q4-]J8_}aoj*U$KĎgy- {]M=S&jO,8DZltґ8*̟ \1$#HLКFOxb|=MeԈAb4i\oƒjCOST*3)j{6|&Yag\m'j+Nj;d+䄫]^*<Wxp%1 9\5䇎R W+bڌWm ઩5USՄĕbm WM.Xp>v㪩S0xbvҍ+}wz1\Azj*L:@\inkә뇣(m~>9u4n˵,*բ#>V/:gG?GWK3j|$InZ>o>{r6cdpnGp)IJH% 2Yxz>ogWi:^}hGkÆwW'ur9+Cvewn9Qy|{4\r{1_Oo|F̒b*[]r.8odk_8xo@? 
kdb7je*N);t>ڵSC͉TI+3Wuf,0ԪO6j9DWh#yL`v4jry4{SkqTCĕi{( XxWM"9^#p\54yW+=Ӈ Ϫ'] \A-ܫ㪩t<q3i5ǻjry4USky踂JK4׃+g3ᬳVcAWU{^$ZEr۩4m+pkK/#u'cUXpj?t\5N:@\ggp~[h,jj5 WMQW[Ոp\3`ur}u؄B`oG+ecUS;|睊\[2p%a7Sg\"ly㎩-MF8N.4rϺkoR)jvv~R_՞Z9kb7Wf`-4Bs[1duX)9#kg>)cjN6NQ;tU+Zerq:nGSN#8 Z)J`qɵҡ T4rnƪ1MBSr4jrَWM1CUS6o7ppxW\5viz Tޯ^ ~wVڮ9Z\'' byp (|IEJoOKh?'ܫr KʠfYǂ{@i߷_Mנuɼ/mNNS |,DxzNr 'ƲiyV(bZ8^ț+\t|~wv@~{0uGYW{}xiGoSf+p-Js~)^޼zwk1G(G_ղpݬoܡnnϞ)hk;4#) 3xO?ޅm=%Pf:{Hї@KA)ҮiALQ(dt5.HdP }ﯖ4F/ ˒߆&|Wb/\P 2 e5sq5Zf+ZQN&jV,'2kﴡ.G)%V eHJ24ib̔pa\&]m'ݦʖ w3 i*rf[0}f)}bmtPE6J%'90*6e ǦOP ,}d91d.Pmfj ^FZW!wgp6RZI[RXTudU'ePޣO´=r[KB6X5Ǭ:Q1Q9£cɉ(CfkspEK3#ݾKPd- Z(E,-n ]-͝PL D!(@"!b*hh*Bٚ|ANO•9!VH ;B4\Z.NB*7{ IR %H X+A#do<6 nn [JҊ+!RCI 2EUމj WLH5IaRWnx)a;A8u .cXO=_4HM"lRZj^)V@5 )"jYɰ8BJbr!$ȼQ,4R:.&r, N&!a'UlI mV$L .^kV Y CatP$Bjs蒣=SXeīhlk&N R)m" L|G' | Z#k:Tcq1\V%FT` S;QR6k0\cm7;PM \r;Aq%T*@(Mb:)-hu:UPJT V(ZeLށl0`& py# qS,`9'AqC}2Ȫ"J@ZAAn7 L:A@gGJ-)!LEwV(J892rFj lg (Ez@? }S (ĩQzX]{6RQvoUuAd"F_\\Lc5QYkzA"pPBiM YV+=Db0T{4 {xQ>a\I h+~9Eq)tl0ICQŘ@QE&i$% !pB6}fC< ت3Bϯze.]#]-Rv%cc{<]` m3LkT^ҷБJ=$]N !iZ.TUFL1L: !'`Ge %tpΠDN+We2^+S+2] xvyT/! zz2俭%0жMU> Φ%L0A7lRjh\8h|yۜ:/κ,|λ6l~QW2IVc P]<@*ih$t#ll! f=vw";۞j[ѭ)+ךBj yHYChPAI( =<6VeFE <%2`nжdsC<܈ܢzY.j1uF;(X(H Ш{di SRxEշݬGŰ%6vق 'ڛ "($'M2XiP'W WB`~ՋgDy aP(EeQFR:&ݜAn,+~x.$mDr1(j=j hNmU;3fn/fFw -jf=X(C%TT/Q1WT ZvwZ=D50A?joX 0%֡H5zC-*mP zx3XAV'X)©rltAiaӨ/B\čt؃U2'ʭ֞ JOQ!,iJ_*Ɛ t o:^#@.* 76>kpU?ri r{Ajc!T$=8DԅKo;)JN-, 弭T'3twE#BPޙ#'~dAA(ԋB1f)6dUs)$Ud]TEF(NQAZ\]g|(}DF8Zf/hm>1FR,s9.Vt^WMrLtg^,ϖWc/t|đ^-ӓ_y!OB oXБýWg'YO_nOǗжۧQt~6mX5[IIz޵2?Kg*(dKN 'zی9Gw% d zv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b';Z'T4"'bߝ@kvHKv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; %'؎)Qf@@ۀH v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; Nv9/5vp48f@@;׊@ s"; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zڿtX~rAKMvA7\kYܬ&@I%Gَq V7.l\:ҟJԽn-%v{@[iP& btZ63M nL!̄=T? 
w̮˛B$zI5RԊc6OoBưv_/dž1gtQq])_`rIc^]er^^ kUͯ5=Hmpv9v/ 4~4{qMR -_dN|%~MwV t$obVl(t92 2-A\,]qUt#CB( IaC3RpHaBkO(]`)|REmjy.J}+TGodlm\hZ̾\#Uá(lJ]2"bІ/Z(cNI'u~t5 Sq7t5 ٳbL+tC/S:4DW +L+tEhw"J zdjCtƈfZ ] ed:DRk+zluE(d:@B5DWlt;׮|"Q3] ]YW{iA#ũ "VzX-ڭ13/s}_-~SWgyѕʾ~"=kJ yk &ڻ`{cll^儆A^ kҟyNbP=3Z)9iE:}`y8b1ie;TC4Cg]o ,Fg4${ ^@wK(5Cim}Ct޶Sng~Lϧ+zZ0]]yh"[a+uBW=UjOW8f phfAU{_ J7&7CW4„ztEX+LOz{,|jwv5 jt5 ]MCiڕ@WS^bh~g{laPzC+!`mQWVFtF*+$ֶк]x0 e:D2!+"ַBWvWI(ڦBLWCWVў0~&éZ;5ʚZ٠eRڝ}ܠ>X36E6ִAˤAkwy| \-%)Lpc3V*RP%X v҅F헸-\t+t),ϧ+\5tu8tc%u_bz ]ڸʓ_2] ]E!iphnlJ)Fꫡ+ġw715 N5Y{vM+tC/ R5DWطCWwח'ڭs}sL1]}RR+M3tR41 LWHWZj]CtRf* ]ZbHmKt u襎VDWx4VBW6}+t1] ])V_t7Gَ"*BWvW"+mro8Z ]\o[+mjP*VWHW8|Ct_w5 ^PFtutez?Q IXuıRQ 莚n"=ߠeZ~o⟸R}lL\{Tog)=ߠe y컫x忧a8ŧ{͇af6Dap{T[›}8c+; Td\+у80'OyzO93BȸvF+-?Vt}YH|.=>h|/˫'7J]|~_ŋP˄%t|?ƈTE `bz:'%Z {s#t?\.NVC3~?Hi?? Y~ 0_->2pӛgxcx\ߞױm?^J7wo}X>"žj{꺮ޛ%?.|,WvOzՑY7?=9J%D1I돨C0k1  ^7t`}iq3c! w]_@K_tzC/V9Y<+\Ac?xvV}H$8~Wm:AW]|x]Nkcç;e9`~80`jqٻ6ndLS#>Tn/ͺ|JJᴸH.'KkAd^@Ƕd @wF#}}03vP4 u8+1ku`0Nx}VP?\UV|īhd.6bhi5)~4a2<r<~[T9kF@['kWQjmx  ]QLBKFϡx/1Ywa:]EG[Tծ)x z@0/" =z:R\gLSA$',S~GM&] ɧBk O}mѮsA(='%&Z,i$`auK-`#B:/pAgsL(QmGs+:2,"0#ah$d6rvɻZp%> Wn֎3xвx_zNݟfCh=Rwod47#13cA:\RX>Jc,tSDEw7s@io(uZ SP8`Lt5 >*yT{DLYC(!)F@c2#Z2(:o;cƌL"^ˈiC5&BVHKDf 09w,jZj02ɶ{ |t #923B|0d Az, J0j,HI_z_8 f@`WbƱhDB/n4O_AZ`yO$Gì4,fԵ T4L/Piq>yHɛ7,b R+Jf-*D^[= ֧!.b}?6s 3v< fXtbRSs>>`T*lg] dT&k0ޅ';kǏYE'2,_}dҙA@9mTTDdL7DŽѝx:ΫU(hJvr_lƞUe5W7] 5ν>"[8r`M'&(Vug7xY׷z .f.l^)`GXr}+g8ͳJ,u$X[6 mPUSylG:ہp`f ,fpnZ^UxG˝a:oi}}}xn%M;nGu5U7Z7 uu>ɉ{GdhF[ϭ=miSZaʸK0uݵkShg {4iɘw Bhg "H{zOi4bG =#R䜓ꀍ嘀vSHDEUGgisk4٬8ˆ^FcD2Y+냉˔;SDL ` Xy$RD3{ⲑ{/ָr2q}B[`}>5D|%TϚ]7BxK_u:vGT\ذ@0FH)P+J:wQ(-Q\)Y%ịUXXhm 4wFm6*GT KIUc@C&g#g>9N=n@gOVz 95܌+zd84?\69cF\##!HSVL/Y}}1F._֋Rq}ip5mw,X0}7+k*߱`Ia*m̓0hKbh7_JMyf>ZYK8* ¼&BP50A*1jyΨjd>$OOnm3h{HPM f?=L'_>La=pd"H(^H-1UBZ"(66Hb-8(+t#@1E锎b ZPۦB}}pvPm*vq}mէo+3#Tfߝ&(KN:ϥ0G1^1<=ŀ52NFhGBi 
ք2,AZFіU1`"{&CDҚ)RRA8);Ŝ(arglF^-mkx|s{w3$oRs[t}d3E)-П;x;L?L,ʋ2DV2'$EL+FjwyQمۦa8$chQ2r#M`{}"3K9x҇M&)'zR/m/ս_p=4YqUqul9@O泋F4fg~cOpŋ\M󏔬|ĒuکbW`ꇿV)\M)}8j0hIbfPUuCފ/MbtF\|ټTޭRUO_ñ5C3ruЯ~}ol=mU^Xf:[Z7䯮s7+ZbV*0xҫ ]wx%"E`{ihb?\~$nWŠ iW{Ή'? tSnf !;}-l  xB^ݭqOuN8-o/ 0>pٍݑ v̙ߎӖFH5LRMa`GFecP'MiMlKzh̒6qog DDF9uӌ\r%x}'HҒ`ba)X kN JH&NR'ҼwdN,d~z!C2 FЛ<2 ie)ʭq=/gǜ{bJ/Kr敿!o#t;ou=}ݷϠ}p@j73P~Fut(CҏtTjP(tV,'C3 d`̕oN4if5ŒShؖpTCFTfO3|jǧ#c>lj() /F읗G J(x(̈́e 0W9UxS$^A?y?p8HX% g>ʠWsbE9NAi`sſVWXu0FY)f`+X/k{[j6XIO}P 52uE9,a) q;3{0(WOD0e02w.Ar>|Q+:.mQ_[<&@YKB%g}%c_\Ԑg n*$SiGxNB溵_=ayF+F-uQPUo?EEpf0j?i1;t4kSoxe.U`}qlv zet򴽵/nz9ΟXS|YsC]h4*GJiix$L v|e\t0a|&LO=h5g0\=EVi/d854A>PH p %%c88/ LD+|n6(҄H@>2#$ \w.'zB] ֗A7&B5ep;-w^Nmyrmz6Z/7eC8PD8cp!0ʁ  `dz,TH|sҰ^rI $;∈Z0pIpȰ GǁS M2kP4TJF9`Ť%B"ȒQ@RQ 6Pd4,gS*6\r A~3$@&D5; +6Q 5Kg.{)>ozIy2"JX)GGtDs@IN \C8]$r)?qoM4P Ky@THX@%PH!G +,㤢āQ]Ov(znﶀY-ɉ 3"f04Hg1JI#J@kVtXEK[>ж2DNEȩ >Y1SKy)`#8J Dtxv!:)ԓ9E`:oYy[;\]Ko6 n9r@c0rd9O[I'>k:]^YWqt5*t W+D&wК|oQMDhT4$@ByȺe:GDtV6m}75 ~kPwt=8ga&s7o:ђ>d_IL.U@+Lz5#b(kޠF`pR Kb.v+p5+D̹ ^d& $o+E2e 8 PieGQZ{, gc*b:eL`3Y*oSڨI![ݲ&]#!Ne1)T"mʩ!3τAIA,{}$Q>d1(AYku@sV9L9 ! (O!P ^ +Okml]юAu3u.049ˤ~4/LgӡBuQUnE-T [ )=;>Pq#fG 3nĜup-S $A4f(շ[C`*fsArCgƶI $XV*#@E"Yrwh us3# ]]τ`k&U^yr eFIzw- ALa$oـN|??ߌ;xF-, MC*nvU$W[+@)sG\BQTۆ:Թ,,Vp2 $IZ2 k]M&E] *\-c=VYXf12Cp,%g29qw; k2":W-tlG9) b)A&J0$G0 ( 2T3N0פٕQ+SB/"$FH%%O-a6Pݖ8E|4 ~W趧"qkd**i#;kȆ?2W^_[ԝ=hL'ŁSG^5|ˬ !&i>:Yĝ20Nӈ9r\@f|cI @~ޛrrƴQ,Qɀ΄ԨhI(h"&-2^u 8ʵMFɦvZ]74cL6%%gUe&!u$( -(Qc0Tg.n `–59L#o2:(l[$uُOuaoH5etgK|7TP"ʊ̨ TY bY KT ~Q`ۓ,/& a,Y%GAW*Adr ̼1٥li~ lwo YZӎqg37Kwg4܍D):˨ 3ޱC@7dwR.BGEcCę*CYav\F!!D)i+$yH*FT"W^ <$g$ٸ'K ѡbD#*I`6^, 6J+Zۨf7_GA8gg)Ŏ%w>ugd1@z1YOX4pPfzw쇉'8EIqa5WYfcoV^ۙϤT=n(&82&NޑRV dJd[o5C4rte 2kŕZ(XΩYᴈϹF>ay/utjN}mD+z~:8؟ȹ[+t0ً\G{i%d|xʳ 0{i~uovy џuٙw7g[a}'kYJwѷ+p4lZעzKoٹƚZ%P[:ijF46nlfu| M8zbG'mOkNqت`׷xȦVUKgNJ:RV4|~c{+fch_wS7!~?. 
Q}[)?~>fOf܁U:\~ ^Z|G}4'8E-?JkUH^o{}??Q<|o?{CS_L1:tM"KxwioCӮE-ֲͧwmmjNXl7ڄX,@Lv /@xߴN}Y5%Z2vC1{>҆4a9ǔc@9Z4TSrg;ذ0.U=nca7kT,mm|FaҨ44jeF b,`Э鴲U%Z_j㕝{˲UaUDi;NN;,y8({+87]^տ_eK^zOGɇL]ѧC?h|~?~pߏK!!Tp?qWҢ{I Ku>ϫl,õsڣ6釉n7)O9V8RyQZy3 S"{>bݚ87{udǣgB^=P{Z֞nv%y6]5J1T-(aW ?80L+!g\0CR蔷;τx~4uLi1hiWce-q7.4f1E`gw&Cav%Sn1EJׅcO8=BQ]ւ'6WwW'6WwfI5o$^1[a̕զk1W$cJ*Z抗}Ѣ3W/\zbj ,sUĕ|WUmoHǿ`ܻOC& 7s dE]_`&<%Yl3SdWv]+lȕFOHdpv$WDB+Diz:BB˻{%$WsLܕ\.\ѾrEc+$VpfsCKSΖ*׍zt6xQ`j5jA |qkIosL|Fg!PSGJH}#}J7xBmY*ɗwA_S2Űp'Wq۫8YVsŮmr Q~rB'MqV|iMjt+Qh[d͝-"sHi^;m|X0鞟km9vU[˺B)㗟_Kd1P)9#H-||GUq:}qnS[Tu)*"EUYsUy[smt[S^ 0NSi};Cw{5Xg+&"\ u"Jlru"v):m.WS \\9oNiȖ%dq11ubJO;Fޣ kP""WLk:܀)]wr?ZU+`0U+ZMrՎ:]r{zjу00!as=vrŔVrurhNW\))Lǻb\:u"J'|/WG(W(ѢOH.TiO;J/{:B}c{ EמnZn&n{T X+L364jVF2}2 yLkQ8v:J4K8&3љs,#Bfj7?c5XXH\/QA+) 7enϡ* am%T!ׅKeY2T̼՘AtY2 1qç33.¸錋0DiT?.r.R!`%u2rŸ R+=Ԯq(O\\0|v S+)m?I+M2!kJFI\ya\:y1r^Sȵ>xU;`%wWv\Kmrْc[kxZS^@-+֠+5^1Վ֊S* 䊀Ia+=pW{KZybJz:BR@䊁m:z\#);9Y/W#WesPrE\2rŸϺVrŴQꎝ't>'E7etSָ r1yUz8ƝJ0R[giHG_?j "*TzYyH!,=E^ +*jK%;3r0q{{_*{>_0XgWAzQf˵/ܾ__Y:jtR 7=<:7[\]"<Lf:pŋ0sl9AU`]@Ϳirڷ<^-qybKZĮϲ`J#YG8x{z:՛?$W?P|\DvR͹$jy\f j,]L' ޮy$ yuÉ=:TP+S!W s|⎦/M:OeV14a벯L/m#Àw%b(PO._E qV3oMMӣDx|XUclwun~`9Pk6ZN:m9#R|.7*Z훼7M-,٦{P8KL*;KLi),Ycrl[ ,B0(E۳ _mAJjV-J,N [}7y7MO'ʌjym=/j^{Jl ^Vs&iE Y>'G|4O."d/C52z T[bB5'@=J#jo=yc iYrXliPh.ٻ`;uKv&KJiʪ~"rJb݂C$ooy(趵(;B)`,'X=g,6blL0qQ3Lď1wo{Ӧe?\_28fhǴxLNIm7k-mMM"i]1e^GNڄ䊁=&#WJEvMS^QЫ爭v'WlIFE*rŴJK+3:v"2 j twso0U=Юl>*nO+.b1[nAfjQɗk?n8;_Z-<*eƬp0-ׂ7^JKbZwe6SR*0xPVCUh0Sϧ% 3ɸ„d20ӪλLU +l6ȝAȸ.a\Uu]e/WG(WӨ䊀)xOF׹Th]+Twur1*%bq צ3˴Y'L]/W/FEB ?ԇ:8\CyW(Uǖ @ ^ZR*\1Sz\:Ԝvc+FKHXJHFWTi\1\\)II*\1TimAlt_ruϣ}^pB1.$3Äi1ctV䊀Ig qJE֫0a^RR\1Og *ӂ\1%rur\1pBrEV&3CL#WENrU;/7hxWU;ZsS(]E\a/WO-z~sç`C}sNA~˻2Kwɧ3d]U,Rx@nǏS֩*WN׼<ݬoxo~^p"/qϬN5|UÄ7o{NT}w][s8+*= R9Ivkvv&;)R$qfkAۢe)&&%}@wc636y:6W F)T/ӫ;| mFJD3 1 '$X+9}6?+#sO~ =+-:B(E,Da[,޼t-VfTT(]GB7+ANv^#I"DUv<Eu@|%9~L6. 
ZtCNeqi|ͽ4BYO[C2a#LU0# )Zb]j0ע$l-Z4ˣ}'9:$ VcQP_ %4D$9u;6"+Anew0C+؉`c^~ɋo]I gKy޹(YKrxydP@Y٣{|@ںw"]U"YSb1[Oldze2V] 3@n{/N%2jk7X2实o^EVlۓE/Q|2dzd^㶳mgUvwF\,gIRe$@)cEcT.2gŎ4x4Ybѿ nTW㡬ɧWjbCBj- |sfX(6EVyb4MQ 10ޟr-X5_uwfPUw㛢^ޏj'/IGQ18/qa8z#Fe_0+8j/ 󼔗aKc0ko0rTSeʚSg0`VhtĬ4m%NƆM^bVt ǗV3-3 c%xr/0+;q5έryO0{`=vGoDl#N e&^M03zz1bFU`Dd iRs9GD"Mu&‰L13{Pvjʹ !Qئf(-?t18 )fY'r)-8qAJ A24& V$MCTrF:xݩ~jȣUBLQ5G~n\s}^k h*6ҠrAd؛ B*"{룧@ zǥA樾Luf6}T}Yw :$] +wɆ>.Yb9f݉;"^&f1p ht{{ ޼ R Rt()ݎ-&Lt1#d_p/0x/J61n,odžcV$t fW&g%f NcVpt;PvJUƬਛ NM/tؤ#fUk Q{-cZWD|> {AOYBp KnhGhl|,c֛8|"V&Es3oJ3+8KBV8XYE~$C??]|}ȉc 'e)0CB{k4FR$ hg )S:rLqXF?B*X4]01zIVa%q,"fٽD^O79ı2&1BnA/qV{5 Q㍤mĈca'D쭥q==\jM=^j^9S#C'G}6ډ9Ә -DԨnGD$:Ifq-D1ȍiC !>Bcqxd WdYǣrlGN<_ҭܓs$ 86x$zHƜC);KTI>}4JDuUUo~[䟎Ԋm}O[\!|ؒ)W }RrڗLC/5R /^⣼P5i CHڛ[|b랕Ys,趌&*ci4^|QJŽ80c=30tWv`%NP8{gbMrQђJ 5xؔH%ǫ5w}m%&(E.bF1\չ8!"8J`:#桉LOn'ñC 3ju,'Pd&Ӈ/I&%D HHKM.|1zIVСN{7ۡ`Vǖ%cr@Ĭ.1Ō^q,/9arϴ#Y q=\ +Ǿjʨ91R]2(I47]2ceLd:Sb+>ϢIN̮ф3WZo|% bR6z.rOr"4_D_fa<)]-D'j@: I^/*t,j`ASv ĸCoHVT`"JEZSxE }cËanxqY'.[T``uPn*-@Y q,Gӱy}g$JՠÛ7Eכ C4 -V#s0.Fzϋ3Mw<{ uZElg3q2V@?|3 Qm}Hw_T)/hGCw^2RƤAZf!m&m 4<} )йLQ̖ !S!~?̞p\| x+Z ןg\$P0dAhitq|'A\1D>Jkgpn^Y9^є'JJ# u5oinq."2$9:O*hPs$L?P&K Zـm>鯁|u[70j!l7ŇT͠睼C~INTD(:n SՖ8?vԟ:i{ !URtn/@%ܒi Jx[YA"xkҩgJ^9)ݯq.TҚ(QV7 BBr.Z5ZjW&!Mnu \4Fe)P x5jZ][ zR@%kP,bAȺZv}X{ƟknX90 ['6̫dyik%0(bn+8sKIjn'0fރNQ7^+mHߛMӥyh -l knİ^C\Sv}Qߐ2Tsj'/q e':E_'Ui4mIy[<و.d4\n&Ԗ߯.$h=V5M䳸Kʆ04k6+B֍h`= U9ۦ j%QeqGd۫qMx恙c}YҊg|F]ξ}//<HC;JzUEQ}s.Dyb[p)TK-(1QQ"@FWxݗgS!KAI ԗ8^;bj eaboJ%mt m+wjgNf(]#Ђ""]+첎pՐX&86 PЬ>2#d,sݍ>A&7 8L%iO]0. 
H&Yu2lJʃ=zb.Hraf@JV2xNG!`]0J:& bpha \ EWfs*9a7Uט\Df-fjZm>HE,uXBX![J/U 0XI+}j_W$T~V Z#խ Z=V"!)|B+AE9¬R2#[ΧK$֐CO lq\Myvq_.gIRRy$-ٿLTG~PE-_q$}4xd'>WqLC^5J cE %u6}Ni|薬0; ͳJs.2sM<9疷4@.J5Xs;6o]m^~_Ij{ԙK^Sn3|Xzل>sHLfqCS&\\%#⊱؟=s<.v_Mf57!*v.UL9i\N|n6U..[J՝T,V5 I^[ q)]neI}Bo QXkRFr\mvhWOb >VRtp9kZ75))$OS^hI{j;cC5?jhLI T&Y[NVJ8B~Aq"BUC$^C$tɵ@5(is$P=\`>ᦉÒT]HK-Mw%?r>{0٤(",!"/@ nA R1TE@tUS>?#zDУG0].pND qUutRJpQ}*v4jx020C`Вd4IҲK_A^:h MDJS{v?WlfX3T#)(北3F 4FUcg>˩x'Sb5]]9ޖ((Eƹ[FS&Q#D!뱦EcM?jxM_Pjx-a~{uyț07H☎;Rü\4P+p±@S?>?s.q 'I(@`JYjq9f"T>^>+pDVc3%X҉Syp5UEX\tiM ![ lb%TFmه>i-bМ'ހĦv]Tt~WkHֿaU&J۱,g!N,OՌZD[].ê ~9EDoI < jT\@dFjLk!2.NG84gS05bXI3 ƨ@1+ AU^0AP2x!4Jr" (FZ-G)dMl$ tN_ {( xcNN)I( 2yF.Nb@}KZ%hlU`;uT0:*k@cf7DTw.;(1[`tgj#Zju\{͐"KԊ7N 7b569'sCK+hZb~?oegrrCszS]ٍG.8\}gQA1Nh oɹ[$Zup3V6S>+O+rǭuw{իc]g 9j XN̡FS*N6[i} <_Ze}L&Nڤy"k| - Cf;+od4NەBT82}z~l2&b𺃚v\L\Ϟ-wL|uMs\.3jWH,ES&ӠO~˩pҕv==ޅSa@30rvEq65ckp+>R^xT,^ۗLDd3<#NQ#̜ݒ^*ԍ'v+ٟ %#:pz9Vm OHwr`iabw(&7\ r{/E(|:N<]~tN ~)bsD8|ny~Ҁ4LBhѲg-- !U!$p46s2ma,Ow4gybW 0ShABlg3/tM埑/ʱƈDiP.QzOƱ Ám ˅p&ݲ](Z?3QOAHڹK5 &n]FnD똉-rpZ%h=糎[P/>dD)xFMmH6Iw'1g$Ù]"dHàW+l_TL%oD<۶מI6v@;>נS $>b1,a4hPW4,fUϰKG?Su\8V_`Zh1r4/ =vrQNQ+o#[張R2Am5]ȕ:D#!pV4.v@Ȝ>deG7|n*: \?M}a@)pi&?a L Paѕf>u}Wm.UoVϝLC@s}|wDJ0;NzP1ͺ;344@Q5sj*'sx_WA6A*wBJe-Q+=Pܧ/›}OGO[Fs)$&bTrr">'Zwֻ̟FW'? 
LhL3Sb<DZw[8aBtu$E^sc2N-n͛?7E+mus\k4U_0=++g 5.0Z}/S 1?]<FeETم#WBjbK5aL0I|>ӔDž8uEHGf<^y)a;|xUܽ&*˄,EYYmܪspxP0ڂsɛXިҥ97E (u2W8Q+n/o~gZ(*/hZadZ~0.Xϼ1BQ91f;pEI&`;~yS,hwoi;15.>GaNρ7Hty>^vDz&YvoǦks: ׬ߴǞQ4Nt1cY1T IJBLou*֩i<)h_K۹2>{772-<'ɥzԇ3=<"JV*P5?R1y[/YmIMJp@1% 1?yr]26 emWKj>fy{mPcsUܠ_WB&FcG8!]=i*7|5S4 R*T g;*)\L~Jt~Ihq{vgRi> i H,@k(*8cM/6T(l,+[thcTmb2A 8*T3TM|6/a\C*IzFB:t{䘲.z sߙ xN8͒w{cUI<~>'PDtlfX3T#`F,/SE5j2_)ȅ-mc;xM28>d4uR:^|JpQΠzDžWBY hgp@d_}2]Ãe V;Y5vfu !5[B7 f6EڦQY6Tv լZ ]kʹpj<`5@6& nf M0$oK )JN[H,EeXF,MJQ_XA3_dϲ:l/ LԽ DxQu\)wqd-/.N%nXUN}9*ɭ8} h6ztaVd*,pHҏ\xie)B}HUtfʠ VFH|9O4K1;=%43De*t=fIնpP"4¸Imjag"MR$Nϰ\p'UEX\w/yUiӶ `MFmuk_0/s4ΈU]M39 % lYն }mJ_\{˽/[aa.l3,rH։Q,&OqP/mR>[u`>T%&||~t >Y}H`Ee[M?ňiӱzxv dJ7DFHO,}Y5P 4)Eu(^9/xLhb;$%tadg(xtAQ̦$E0ԼD@ehԯIPC>˖]NVPHq<LA+P() Gi3i V0,b'lFOھ>`ɞW(Q^e~[8ٳXY !U!$p+˜ma,9jY5n t6T|2%zi?$Oo8XV yT*gzb%/PfJ*58nOTlYku$R.t'iӈ}?OPcDQD0Gmݝc978:OscӉ剢Q6/ ܐSвx=ښu 9CpiٯYD/>d&i;Z6*,\&yݪL%}g zƇq>\̇> /TN 3d4w;Fa%CO3$;6 vFJ5@Y<^6*Lsm"aq:=#KuH)/Sy=:6/bq6_1@RͲC)}*"^WeB>]݈mEeV쀴큥KJBPt1_kd>VЕH"׿ߋ^O N{)s1ëCuݻE^fIӇK1q&3Ŋ޶x5G ״@Kys7$?@HM(NOHߎ =MUt'BH"vx SE*ϥvOR 5Шmsv|z@kqv/t& 8(R!EtuZpxj5"3ES5[2$i̓2r$OQq&")%ld51i|S"}ԞOceĈ妀vۏb8lyHj4q/enGjؙXNj]A5&TIJ>?¡?{Oۺ_! d'|0`,v3A^fa7xڲeIi?AV$RnlʦyewUwu]Gm(%ğZUwSLP HO9k}~k3@@Wyi 7 !J6ʽ4hI#v yco˺n|;5겕w] !列Z#[J ,uyVhlOY~S* ¢m1 &$;$Ŝ N5#$W) ^w0wc-8Pıfga&f ؽ4vnE#i:+n:~D0Vm\>]r6@=N!XI?u8|J[B?ż| PEwI>DY ̪sƲW Th=a g=6jsdnQΨ#]Hq]ߣ2ڷz%b|%Om)Kb7[<>Kd2MB2BmJEE ˮ٠l9*i{: SKPp}$ y`t=im)Yd*m!  
.4jN)dv5ʹ bH(* 맥A7ӧb@fE3s h4>;GHmj)lk_]8piF=.uQ䧨F5Vodi@jR IhrcTcfR2cAτc0PIaM $ңd%y1O:g7:K^)6#D] Wy\fÐpd$DQi,# k("oe\$ RbW?Ԍ$դ:Y$I]R3JEcT&c"tTUVp\hx'T$~'KQG[,T-$z*QU( ]@ GͧW{hE t_\*` j+ ›-]GC5CFqi [DRucfЁT!VzoOImЖ7xWZhQ>ʮxDy&T)Ҩ}YTolBi C'(C6Wk?J┓Џ9 }7F&;cȠ;d,X|"yV,űR*| p0%9šT7=]l| x7iŵ-= یgˡ,Bz7c ёKucuNLXI4e%Kv}HxdKGp;b[r&hq nR1.ݘ@~4j疬hO'FVBGs%`,vaĨ68^: сa8E `rw0`81w@u˂1v504FD.}#WL;0qtյDL8+qֈ~^O7',7'>!#MۢH3n[ (,Xoyyq/ijG"1Qw ԡB;YK'.Hv$ /ęjlAq``_5Wfd+nҘñyfޠ2 4lE`7ў Klgȫ<3hN#e?4=F!4E8a~U눍B~N%c)8ɾ})Fڀ(_;84&jTD Δfx_#$iHS~ЏF0+`l[*S)a4bF@QAn@,p`fsû-22]({r`gqƺg:@i3 [X ׎@RrЯ/` ʼ!7g|(}ƹ7A#X Ȳo.>|qt}I"1j>8#&C/ia%S.Ƀ8ȃO~osn_r4M4JgD3)b n B!N^!~C˗rӧ&bN! gw OmG&LۇQ;\Z#f ^x!SJİ>^ZLSxO-GMܲQE(w>j#PźWsX`F B&#"YNW&&(iq`{)}~rm&#U(Xa a<ܻyU% ) y-W\x>?WpLfzm] &x+@P `Vn6#c gٙmڼR6lq|Mpܩd S`AO7/CZ` B/w_0pwޟ3='?Iw7xzZ LUIBdvi[ªǽ msY+Fv ɉ.e4(Sީ=%t]3u}gWnu\VTθҸ=,t{b^A[]URC5heU~p;Z-hA÷Ɲ C9R3:$+v3eó@SݹEmC f=q99%&iڌ]5_2o7n:;˃VaK|DIL`[Ȃhc.څ >_*X|62 ࿍?~?>Ɨf,]o/E a'5-*/lV,t8X& tu]{#柟'ffɬXS6˽v8L*?eo& 7IX ƾ`Sj(aN#)0 )Jit IBAy|q17мt0mQ2z<9rd- W``.Яč~\t®5aVz_9q4wԑ)&KKPanOPjd-1z󺻚=N7BK'nSN^ORҊn@\a$;F|yQaEiEe1 q1嘻;tm鯏x2R"j)ՎN R0j(1o-1ӽpۗ9`/0^p|́Gƥ5՗t4.rsw0RZb9K A$l6$=ihAu-*0$O}_: 0R LTI0Rh۞|6Dk[IC+Nk155g$ȯ}~y!emV ǒ6؉D=،m,EcZ,c .#&,cd` ;I/XK:56bokbz^2 -%oR50涴'<`LnOKfƑT%gu1M(SRa;nci%yZW3.b2+B6JM {5dʳ8-fǿ}ᛋEԢ¾ >Rgߞ-F_{0!4$ "%@HDG:#PFaM,Xr5;Vˏ3qa͙7lG:3a!KXBpqT;,5v_Bt,CJꢳiz>.R H)+]L`kܥmr"!I%U<4xpѺm}ѩ5dLO3[lvTJ18^uF0$cQhUְYh#3 e:<ή@{}ؐHigH>G̕,Ta!V>^ 6K逈>Eɇ!Aq¶P<jz7ڠ?! 
M򔏖IݻN3=k b"(%r&YBdN6҂ںć JJ hz-WӺZcQ2+tZ-X:TLff|GTeK&Xg\|6^L~C70ـ[0 h /\J7iš0VM] Ma2K׽1FvH솗?ד41U=B($)@1: ʨ6k i9(JM[1ץwC6 NHLQ[pm!% H\LR)MA$Q*KG9=dUٝPT}םnf~Jxq{dgT+\~w6y N؍~6a,`CydAx- XazE)(I⬭6QS]XNLnxS acByosWtAS:{}="-ա-(d| 3Bd~i><û SLb]bIF*I[VJc rVs#h٫UXxv |.ؐo:*Ew|&EJRH%+J:o\J`/Ͱ|o իe?xe&4=|ά>{+F(Js8 ]zv}}*K"\*]AW"*edmgvRI_z2̵s2ѳnunu.䝴 wBjx{A/·zK˽-n1؉/l{Fg`ҳ- c=/s3.n޳Vmq FK?<_]Mpȋ3q@d$$h*A<I{O2˧jDV TD]_|/ la[ \ $>N#ź+f6Y>)T 2>D.dv+Z ]V hQ*f $w=]Վ:/Ƴz^Neݸ2gDwĀ }wzo } |3!cYRG 7m!b9_t>q՜qz+d1N&Qu<`XY[T>upHzuG;I;=ZEWA3V1R4>qCw[O3DE7hi/-`K 0Q-ӫ|)7VՀY9r\VOFGyqƂ?Pg_F J1s~:oHQz9Ooy}㯮:YUߧށ2LwOˤ1?6(Yٶe*q:_㯳1n/*?FW]SQ.W̍&8u1ZPܗqەƄfTf@߇d 6hYR z * 4H"av3[?.y xLͪ( 8#LQ/h>tC ٞcދ}9KGBA@qW˥VEBl)}.nHvO@G pDց$ѐr:ψ24/=&rg5`dX읥o]H58;BOʀ6z|Z<r)_,*04- 3 -Hg;a>&Ontr{:"澎^ۜ&1ݰ"9{T,~+rݡ[&Nr<V<)*5op'V>{I̱^ViTE~ Ma2:CP$h],PLx+$0ؕ-5@ n,#@+2W{0a,"ME\ $̓9) ꞊oG^l,Ws7Ej dy$zlWW!4Zs#(52D,Av /62 62`Q+3_?s`Z] ;s:p!R}8GGr~l *52teB&R H{ P^l8ُ }8AHhFU^ W[Jz"xZyU%y3°%W7gcH:1r lz@JڪIK4_6A}c=pt#O/YeEҞq6Tdd3lp%@:ÿDHAG~`:V7E 19tZBHi^}dF 8ˆ\,Fo()z\O,Ia$fɈ6 {9r yEwBvej :21$^}\/Am ikWՁӰA&pU 6ژ4 EնB!w̾^_36Uk tmf+ײFvZvmVSY+k}>CO\WI ʱN+淨\q⹯z`,%5اdg"kQi`&oU뾷LKd`Ķ}41 ?-8•Iڏk(P|`;L D-jY)zNc.aϊ;&bR|z20lգO$EXYw1j*M1钅K5f7ab$ņHh,%T[JCXAlY//2 ڈM}Rk6@'?`\PM$lp#E@I{UD0 ~"W rtx{>RvPݷ,[aŪG+%8U5ZL\,U]o?/,sVo\}/j28/v\t߹ ½)* |cP1*0Uɗkqd:I]mN5:Y.\oECz"^ufNB60|f Udn(iҍ~)r{d,EAŖrz߂ć@A12HTXchay]eYPU@g~B {ͧUw&cyR&} 00 ` |$?@逷sd„" S pjjpga3a fGbBq6 (~a߭bM%LCo 0nJZaˁX}TRnOR1]WWR,>~Sdn1۶L5U2]8r]^p^K/l7_i."s#- @|_]gcT^s0bE{V)zyˀSq=~woZd]v-+?`yU'JﯪzN~>||)CL)]ehre+ Ny85xwN-bJ׺n}eg"ݨ=_; 2"4 2[_W 囻5a M?hmu䛋UQ*Ҁ՛c\A7?fƹQޮ z1S+[uHY^ه{1k (f$b)Uv s:487m`ĉ #@ C^gd>u5WxZ0 Fպ2;;GV?3-PXX.՛ڧ矒v>@:qAh%.]yB?sz I3k`4B\2DNfR4N4ÍȌ'0[5?>/Q1^Xƣ*+ϣʤ${4,r!saμgqYxaeKwG},uߥp;T<) jHX}w8.\ĉ'rr*L"yQp+gVBhYz:펮߮P>4>WWRVf3 3 ܦ4"4F3o^@ 8*/ҳƔc=GSKBK--Z+zQ5xThm@3BA;CL29XP7N k5&+y_>Md~[F_Ͷʄ|\@UV{q}nri _bs-5Om0-γFW [5hlD N%B1[է%,tҠ7_n~գ$O~))[q^W2uٰ`nOWW:r6^٩ov,Ө{@N{ONw1ƹp-k~߿t;?>˷/2CT Q=3D 9a6K&)GQRVhŏTmϫ\ s)Tϥ+AE-s)vh=,84=L+)4~4 {f<0ZnU8pv~p(,jY-x9҆sZc鱄m'-F7Bz#{ 
1@8(@p6ZtkQhRkq;1ʦ0D( yM-6lNW\jCQg!1q6l-ۊMRׇDZ"{Lxw5*GEabT'g>2Sd6ocЖ 2PBIQG􍨗L4D_9Ms;1 :ջ_XN3E|Lv/\4G?jAgf B"gٳP;d0n sFbÖݙ&n ,j{[eAES*I9*l@.TFX@7@K֙zd}e(~s(H!a(Anfm2\/r[T@ĄUH:sRZ(dRFjaWz+o#ӖWH' ةÞMv;; n 7숂~갩ö[`֋)^L"( K_]"tbeO)^x ;Bh\߫猘qFwgimǗ[ej/ ^H*k_}pdRq\\F BP|ӫӃ߮ZXv\۫3>7}l?^hۓ 鮧L˗o/.SDC9j߁/9zyݿҢTx.KH~=5Gtaҿf ^ >޿8h)>t;p938]vwè`txFaw|3Ć+M48.NRǶhHzvՔ:S }29T?SF'o=@ec :{poNg)L0eTrgG:dy[Ų}bbj!tHgd:[ܣíwEغDJ 3y;[k7>^!zO.^]VEg}ÛNs@!BR's:LZOT vJ۴Uڲ-=[S:Oߜl`ݟV 7Z4NM.z`"$Lp||~2ҙ^MXjtt@9Vż];u7^<%W4Pvg`w*9zUVNi>r^/Ycr1-Oӓy3 ʵG#Mdm{yH z!ΔW˜4 Woyriu=e.JI +k89y!ITO`aE(ɛXa7h$~]_ #xN,=" 贘amϵ{-ZoV$Á߹VD/-iBڛf1׊73mKgv Sܚg֘F뱳tRi"+?σ0sy*,U#܏|Xp<+7P΋Uo Y$&3z2/cF !?sz7\ns8I6`* ݅9}YhGE'{p-7{yLj7bsc^it=g.o Aat\]&ԟOǿ }?eߪc:PlW@&|fňҀ}`. }FYݒ) m`B1[\Rs*:^VV~];BŇp^A'$ė}( l!pT;jc1 (| :eB!,_Mx$]UfcN5 yhށޡk501<5,h9Y1[K>ga&lŭ2׀T|MVtii4D~gmh;F.ɟwX DIUٮzGeX^zZgh-Imi&]n&m>Ye K1Аw6x`¹LڭL"S)1ؖEN1Ba`Ōur&Vtňm !fw64?;`fC>" %4"̓w)27RK4+U,)&{!uY7d,5w64x#tlNn%v0d$IIf (.1D 7#DklV||+0Xn"D] 64YoQPęmE9 + :nh5N";.VZ=Ԇ\|$4;Q|OLU([Nľeҹ yh&,y2hc_0J3A6"q,b!*ɘL!v9inBI>XtvцcJ%RK1p\Ez+AbM٭RJDWjbxaju,[[ P*%arz% w, 6蛈F:$D$d8\m0D8hp7uwv&I$-'DMAFeU ɯ3BZAjIPUhЁV=d u@P!L]ph>b#~BDV 7ʸJ,]r K(:42|Wõ&K>C2Y\mѠ+EF, yhF֠`'XY#cS`%`7>'4dd ) oU,my{MYxҩ #- `u&lüĒv=}L`gyYb[UJO=͇s|IJ(ZV)|acHز`vp?ť_tȓѼ޾ywQmx߁`XF=t_=|qxzʷ)ߪ|e\Yr G9mrd4I?6Sm\/ryϭw*y bBPxvrzØJL+ U *A bD:۠kC؋rmE1(:ZOe 'LiИa? 
soOqtf+0rxb)ކo֢(9t+=+B,ѧ~1NWyqO TB(rmɅKz ӳFeltfJPw]S"> WwB'j1Fm-&3cGbE?C^=PjCnuvO*9IrJaT=QFS2q& u#C[O-\{y+M_P^Cz 5jYCGL ѧyo5iÖW?W^zZVbZ[:yG[n T)M|{a᫚nkUV[-kwh%.Y[e_ݼT!CPT6q$ȼ'%*.Eb)CS]XFdu / )ZÂ92ٽw&P$xx&Fܐ=}2yZI '6*QQG-Kyv`ˈ,,^~ೄzЎ~~aҫvsUaW=]-swUYC&mTJL*&U$\섯1ٔ5$LXhS LKL Xp`tdT와Dĩ/.|B|7@<< EnTŰt$vvÊpȎ\`!'3i37k-+PaA '$ne)R}6P˻3dZV&1 `QWUxSbqʦwR5Z8h$h2ÆlUaBzf-*qfք:֛p%L@@0WztezH_1+"3 ifv^A>`Kd= Z(K*Jc˅6D5Y,eEEdHP 0M(Gh h҆؋ə/` fQC' p<kREC%\w Ej`%E'ζhcoxP @am҂&) gX^M8eҹHO8ytR64ka~BI)P"ΐ@ 6nB?)3RI@Bȃ$:-lVk,R)P((RB rTQk Jܫ8;>//_ῶ#>ֶcdZ7&`RtX`LXZb rl@]*{}~ߏ(QRͣGI5ڬv;eeٓkjZGӣ7_Z(i=#jm1l!wWqWP aSUQe0% ”4S Lm6B˹-f2zTVAUN &gEu' %0tP Cmv˜1P2Itko|0Fs>5<jyT( V L}daȯ^9!Z-h&)ib % ઋRΙ= %Rn\OUL-Fo|uM.&127]@ ]ܚ3 +ÚfQ@79AP:ͫ,:/A)/uq(l!^)rU[f ^wW$ԜUTs0xKN&VxnG'o3s9di#4@,*zᣝZv&o.:rlL0|К:3VR;oMgO]Y"wc K9-#gfz=n}nlR54pIz.5g&m"(?zo||ʊvuy8SL<7lZ_;[[ԂOg?#~&ቃdyQ;&v,/ٴ N/j΃*pSY7 pAv?3՛E;fW۸yf ;1yYc]-O즫 GEvU#lL[݊*,}_c{Sirz`lH@aeOOlOu^?0ݡ7:-{vN Rs |M5G.jg [NtZv?:6@y*y.:Cymз-V<7EdGt6*߯7\!K 'l+-7f>vT}_!vrwn{fGp.LL3n5z-dӵ.K edu=FƹZ,$*o;΃WJ_ޟ_-,;i + ꂣ \o0C<-<\ |=uZn?ޟz 837yk6[-Qx\U5J%܆cqǬv88cʐbgєc؏FTG/ª4X[5Wmɫ Tմw #lc59gMu@Z%E^IC&an` juc3tߒĜEpB< K]9) ghV˫_E@+f6_/_OWW|c&>@ .Wv<rzv~)8 l3eKvJ̐oz hK{̞ʮ Ar{{82DMꊒ%S(,kp|%WM!jv 4/eC"Ǜ \ aCc:@}31E)awdv^Q!< P2C9c贙#/P3}'wD^3-wfڑ},wKN5ۻ_-zתN p5]TT:@/6I02s )[H q&Tn0ltc$6=H6K`Fv#vwĥ:@Oj-caV5L HK2 \?dfۄumEGwG0UCAZ\ۓ lo4٧4cS#\RD۵R2ړ#D !ƾYF'3*haJ<;&[F4gFl41{a.`ࢯ<6b5daX_6"{ X#oe&:>_W[ˌb+P:|ۓK9u/"Mi:'I[4)ٚRaG)0!l5@?x".*Əi@̏zO>)c>P0:kK&) mR%\Q6:5m9:x$bֹ[ѵnFhw9n% 7^ɒtɭ+w&L^I; EO$)$juj$Eu&Gm[#`kV:w[֗<Bzkyl!8aWB' هk;9aZzȀ*. qS9rCCDnYAdu0|<=RM"+4o&3IzrRbvJaZ56MB5Y"]8y_bTݜ -'v|_^ZCk)1W2p`IΪ}R&[kD4-d?ESt$90\-@rIكSlAp/4'6u1-G_?xl#$g:xtq~9YfT}dܣ< G&JOVj zEțxio;Gݝ} o“;8Y'UwqЗK d]ct>. 
8: "*4 =7 ?9V~۟uŭQ- gVUjz#L)t3"ŮaZ`>T8Zo^Y3xc-yX7Z)2z!ߦgJSk/K7cۗ=͑{*q2L$>ؾO Kߖdt4GH7ʸ+AYv2@y5IժSDKmb-A^aD:44>$KZ+MWM ~]in_Ɔ[_hJs\t"dU%|[v.?} bFW 7Ezlm뱭}3h6bfU&fڢ0RUPq$P<ʧU٪S]OtBQе^ْPL[!J[e0ȡSf7j/0͒tݰn5Guzɳq"Ft9C !u"{)Lsg0܋Dz{?c_%sxoՉ(OTm53di*Հ6IZm]S,'s%?jߚ]yRI>~g5H=zHh3'|92#sB㦺i܀a9uqj^ :9G'g}XKႧUx WF=K%XPQhٻ8WQv&Hا݈Ǎyۘ$4۪簮:{4$w;)yA[)~$9UcC{̎j xYX~{fko!Oy1hOgE7~TG~o$ub+(@pO S5)@=fˁGW0SA~'C[BiB*JkHWa1 OamrdD̔a!1ȢnJ6socl_V (ȒT%c, j&*a$?Mԩ|33uK^|hbrfhp^Sw,ٿ#MۋhL]Iٔ+64oY|\͕-n&tEM O ;;:.ș!oWEˎO4Ie3M9r6]^Dj^o}/[`[SVJZ *F ISa4Cdr!@mפ֐C9퓹Hw[Ii{Hq)YP=QPFQuYjLt4OU%B0>1C+\;^XzD>G+ <+YCk|[c&&s{%NGXi [%1\K@WⲄwƔ{cŅlOv5DIbvvW@|__|:3HѬg>>IF3GH?eClW̎.Ù­d_6kJ=ʮ ;4;}ǭI7KmbnµqD b^/z #.^YqiMXbU&~#g;C|.etP{[8gpvyn+oL=#AG/}#x{.}nK#\А8$b8疐{>^)Ex>{N;&<21ۏ:4,jNڊ;]nU,TFm \zfCF|Ne\(1Z^u?Ԟ)uDp#,y9;&vB2p|rnO2 Jp%#"ug*ZK et2)U:4$1(C.j|:CՀSJHɗrvكd6{.y9""/Q9#석Y>rqC7L>^b,4υMzUOB_p=yLzg!R~`9$m`/IߔrnO,rP Y u>p|fL^ ,y!<7NSg813sC=L+x7$և?`4م{fa0$Ij3}Nf[BI4(\`)2Oٜ*zX~GB:; ^8(AN?!p58 FVpY+1j`xyj.BFC٠eV zjNo痓"@%/1yAq3ž֓M-!? {*SrfԬ Xg]p(؜ɳcdyUޘ n0#:ۜ/GG=ojNS@F~${=ܣy#o+vAF؁0㈕nnvU : =hg@;U$@;`6h7ߢ$ }Ύ12hz{q975\ؠ펫>mAq }@f>0m6hAY9rhng&+v0ء9.ڠ+v"f4evډSshTL jΏc6h7Y)O}0q=';h/i 88M%s喙Nna7X]֊Kxͮ~ZMI4kSu5hET]Z΀n^.T}3k4ǧGjYvYBA7':8BiA7X,Noo??T~rpN\!6mi/muF&F>a"a4]REӆL5:;NE͟*g՚M:A7:M iD=C$Vދז` Z#g*? @4\ɘ 2֫yɾSJi Q`v?{\G,y9O UQ&dwKș=@t+ڏ[pQcKo){"@aUg& i n )8#n㎷j`qՠ;I[yr)$(Ԋpi5.EB ;,ƐĮ.kl? d$ 8ww)1R1ۃ<0bţswLY7 q6Aև. L9P&-3F\gREDojX{&MNSǜ LaQAe/Eyff<ר]t-Q2Mrs_Dhkv||t{C7侾â#&i]?#q&Qh UKh쨖cȡR /ѕ@Ere'>y(f/]QV+1\ sc,g K زs!R !4Mn?޵| ?8>!?~qgfŋ1;It34j'2nɎlNrm#|O(hbQM,̌u3pCyZ -j̔͐zl \LU\=8|,=3O5kl ?`k fHk p;@+ ã}m oϸEspcj<=ᘎ}A.y1jcgB16 8BrfO10 & rV utɐ` 1+Kq8Do&'Aũ)>% iQmgrMCf-)Ĕ{mvg[sk4 2'X~g/saDy"'3K3i8R0eTnД<B[ޘ| !{6(1G$H7FȰ{ @yMxԉMGP^AO|D,x!2 {X tNv 8'9(knZTG2G=K}iO%!&4E &X'n-J.z-N=ٻFn%Wid,@ >IIьXd[dmdImْEQZͺ j\33 <mȚ4:`3Pd6d&G2|ccJ"SIHOSzϻhM6X&@/ZqⰅh%l o,)BM̠MtN1!B@E;3r!&4dl=D o-EӉ6@A4ДP&S(h΄C"@YLu"ڭ3M^S>V )+BON8`^ 5r y:Xo<9,3A{ _) e8jJ2*l>FM.=:?%HMfql'B3g!;͎~? |3tp1xxu1,? 
"LzT”}IZzhlªh⢡h5Itɱj2'\C`#.OH7Ƃ4'VC8%il֢irܺLㄠⳊ6S?hG=Cljd + DXSڟ9sOu| { tXw864W}l B?eNhS>=ohC=>[i7ZmԧdX.o]:U||{˴uS&h!U!g|1A6pKQ/5[-RfXYEY!x=k3ꗒiԻJw HP;4"oiˎl )I=tR~I\\CGss˫:#|`ʕwxՀaI6MC/eLxkMj(.צbTLq)kUxjጜkSo4:jh%<,PZ#- ((۽J9_ r>9,(HaݣhJH ݣLvx7ݧnӊɇp[utA ,2}Zۂ A҂[^ TLp쪳t;њU!gb5ٍ7Ua4WĬV+zDjLriX& O2P`4܂5%blm+53krS7%SZdr-C $.ꎻFc>~y4Eo c9/x;Û7e J=X_~>k;tMj1fFI3rgџ9)13T+M846WX4 R/᝿ =Ҡb=Fod걭_NvO~01 qn qKDclH/H( /h$KE.69 Ny r{_:֘,;~6`Rkmuwm!~>W [swͲE|l 6.\};yE$Sg-Ǣ9Zإj`׃FF)h@A:gr| j`䭳)bq4iɏe$%9~懿&Z+/;Im:7x^!4IZFKd8'aq޾t ia- ; mXx+S]*dǮ/(/Nc3nՀ:Q[y&?Q841ic_տ.k)V>< Ԝ4[)ĺPiH7?2OەѨ i_Բ jPooy55KzQm Zv;B6RXx2+z R&G0Slkˇ>s]k0Crぷ ˑ:鱄on(%WvI%w@ 瑄*#!4FjG>8|Zfh_x -%N/h5bԜx%RBV&NWc#"YJӺ]Ү3l-lE\0tV"}g/o87,֕Jf+޶h)a_UyӞyy 0y}D{^UqD=ϿO++3Ӛ{ ܑ.O#~rv'X v_o8IZjjT n%Z݄g5xwrg]z=st}yAHFdA@28BL:$eRqmYD>t@* 0e0F-VzeR&rb_N]5h*Qj*` t2hq-OLx)H8>:PpSa-7F}lj!35{@4rЙ4]M0.il^d c[P)iePp5U{ukrr- ;M+j0fs *:<[ (MhL}n+9orpDN)H(T}Nźw: ;Q!W`L*s^7v_U)S':'T:pb-E{)+ejخ f\!vm Ї5fPcsMX >jfX %QP:׶bU˾I1VhЎh YZ"7k dj-VEv ,b@I4mm$a^BV8MU&Ҧ#kFp!B+:eA[<qQ7uL"⟁P}Y&ӪFqvޗ=dLC~.% )fj6\AaC~~3 !lF gg@#Їyh>i֤{*~l};-v6LUfvGܩABU=諣, CX:-Q"Lb9e"綮bB l]fDF&NyT6P]^eZE9/Ro%%k=&Ǥ Ӈt eT Mk^ocYd7 Ōp ϳףird"/{`~~85fv<#Wɫ? 
>Sz<Պ,@_voitq7B&7@_G4vx?pY>룓%텆OoO_ފzMo$E Z1,%Lyu!m(Pe:rĨ&㔪:l D_A=kq+% r5pO}iatуFZ#389ka1oApv[4mϯi-}Ȇ)rѦEGg8ُύq(0G9=F/Wgl*%g!:#7m~#&D$V0h5y_y5[N2Iv/D| e7s y{m9ݠUXJLOOL 8 (4XDJ2gĀ9/SQ7 ;dj`|m Ӽ 䛎)m6|N{6y鲹 j4KWtaNWta7]$ѠeP凜I#5`w1-99C3x OA JXKxs$B*4ҝ\k8xr3.8I2_syujMӫ*?ʞb*4/yZ1s\zoc)|zLj׬Sf6YFr\]\\z쯮k/ bd5uYt,|;[&(ΡMYN h̢Ȅ}2c6jڸ{3y ‹r:fQɘE̾9_IGUS|GFWN eAFQRm լc9pou).hCuGЗΐkܞ##g"w\**Y2 6ghor:I3\<^3>Q7KH`* D@7 "h]:r?po 'O27hsrpqu{A8˷(CQɕo_/aD 'ӱvUJEB"m1.0 SqtA w)Q{t@c]%9BNdwZ'g'aIIB qF9"(Av%؅%*cdd{ft\Ey|\ƜU7 f~E^<+]k%Or0NBZu@]o: GMX#h±%PKٹY Ǔai뫜=6+7W w3,~ëً,gwogu/&g_Cep{fz*^?ތF+G !wFv|yX)~j .[Bs*`P+I GKWEwfu qgO :U4L3>Bn,u$':YIJכ?V\H Q]]D4&&W`0wwT1T(HkLWh{ uL&Bf60ۀl 8ڀ]&qr9ąH u HN"ĉmn'aܨʨ|sS:䝕s90s9[ەNRphJ{B;@0ǖ3dt:c;FLh2MT%oШ D8w5o/x_zsQ/Z\ QCYUk}f^Q]!-ZOjo,rS-9rS-TˮCyH ИVJjMĬe%Hd)/ Ί^@UZvE eȽ}917I0=+4TT I(R®k%/ؚDHt׍vE,Z1Ƴg EY2iYDM0-U´\G LMi!- nq)k.t0P_&'R_8\D*˲#W== ]~0ȍzpfÜ_;Ü_;vCF9m B4\x4 : 9 }m8X;PU(v8ؤdGۋUTVbir<m6Ar ! S)@2$ə8mjK{ݿb-1+n_?Q {svy9'8SYYĹ YH 2t慹ʱD+RuDEZY=u"&%ѓp4[P'RR$:[8%\n[p%@%u$dCVF9#@ j xEfB:k. MRhQ^GZJȗK tv*sX E0TIZZZɵ3,CW5w_s Frʸ8g7e۷ 0- 'Y_6B@ܱ߫e ݯ+\5XaDETph#i ( KH͊1Vfw A0Ɍb aS\Sd͍R, w}a\ORX[ɋNN`2_Kbޗ:(mP.>.l7ʀHwg5Гى8 hr<3nnZ8[ʼKMR% -lMJB.p$BZgp{&p{ܝR:M th-DUDi[dP?&&ny<yB%2X-k}nVNo{Lf!7q;9$i;w$@G," RJlP,9f8Eo}sDk. 
#2k[4\s@GdE@~*eKR a-r4ut l fJfHYSRID@:@CH,$vǽlgqoSSK˛kZC]BOR?nQ ^ ћCwgzz$3n+<}]ٸ7W!B<5([?TR<\&|䛪YQSO:ͷ1μ88B7&QNںEA*.2Q KJJ.bGWkp$Ş# Gf=n[Oq0t Ďvfv!RoZsP&9 vkԖ NuIpyTѿgK}%y8KAӀWk u,W%jAzm3U r10 1OqDWjλ/5΂ڒ݋ -bV1BWϱkL Ib{M;>K\H  P;D ]&];2\s?k;~tRx4FA<׎GY^s7|גw} On a`.0N.>~3ކm/K7W` *& \!$pM1&vMu%\@#zUJ;"Fߛ-;bwtNMɌX菭_ &0ě\yZ9q/uhnlg1nI~&yyOGA$I4[Ґ0-V"A!2bfK:ܟrx;{u6 Djj}QhkSdc̦e)>Ks7q~ƜJ$R@fW\Wp[acPD!̶dqXʈX~[EV *i('F*3IJ8$"B&2JK5aLP"'"16n\?\Wm%y$UXٍ;w/nO^\GRT,-`n (wBm*շܓ8thg>-g@՜:k78Ub$(ZN۲di6/뭛L7wOwt'M\̕AީN+5W;,v{!rVp۶U B]ry"M6egi"5ֆ~޿:Ro]='!Jv/*VٗꧬNyRL[azG&70ppQQ2tvar_u26LVVJmTT耕# \"6 4?9OV]S88.: L7tt&gj~2'2 K;>m'j 1.-aԳ<`Lpv|  Q$ޭڽ /l04ݜ,~ FD*?WuԖ e|n lr2I'+mx^4Ϊ: bM:S,hE{/v,-U0/=ks+B5wZ mM hY;NW%TS쬨k1xTYX.ucDyI*[݊:!he G4]}}^J|i}ǚ3C8#ޔ7#PE@3AzA`ꦕL7|B & pdy(Ŭ\pp)/ rgڍ^ܡ<EҸs:C[olx0ԛ4Ӷf&6Zn` Oegd gl ]&Vۉ[nv3wxv~\6Vu".zt-[Hgs'@u[rQqi~vtkD7ɺYצ'i._p`EN'Iz68=tΚՓɿ}Iyv}gQEw9~3eП3X܅oߏp~[[ig[ϸ;Γ~{սf^xC ן{z~~\|p7>~+n0(4NR{\9ߍ`^PM=2Lނ5Jv֮esE&Θg}?nb'Y&:U6/t:1=O'UbsM'SwϯJWPI]u,:oI6.p7^7<}o?0e ߞ00J_Oߞ_x;={#ͨ{ šbgC׏ո,]vtoKN)Bү\Xm<g:KWK]yka~;? Bi9-  k8ԥxplt5}?~E&,ցo_ABV읂Yb9~ {=wE />sstvN, Bb''#0m+{+FOd6@f w NR7vZZLtaE,OQ/J]H7%p& q28mF9Phi1ƷE,?<"kg yHU;:m0_鿭KNWScw|CYHI$}^rE"4 hbd,H0F2#| Ӯ5("Ё_{H1oQ@<:=#n9%BH_ 8,wx*|N>ۓYtDl9Ј@Rձ=ԞXњNɈEUV!خuCۄK5UJM 55&PE 55&T)tuG 5%$"(4HM- 0Dc5UY*3Pݡy&~Y k0'{͌0 u8_Aaj?| Oد-nhq!l,;ʺITH TiqP6֝#ml: kXXm[= $|$$*0vQ蛣#W?]qL|VAE Kh .bk,Aք'YD~F"L C]m Q>^ ?)9~OsB»QeoWVv9@,XS6֟Tm8z1_]:gk XgvTc_`w~/o>:nm51Yfsc;Vo/+w.0*8{*iM$Ea=AwސGEQ|\b[Å@PO hS8~2*Ab̤;7Ωx8gzfz9q-t*=:D,="n6mZ U# Ik 3&0֑HDKְ(4ZE!L[b&Èbv j.:{GAգEG6XC;pX.*y>(s6&+Xa ALhas&Y}WKr:ta_sw58Slk0Ά=S ;''6qG^qɆӇNt^+;yh :8u$ 9Fx.bjYNӣ)+8u%dwt84ðk(c&u #±2.MwElyٷ6G9zԢ+gdAhy2ӣތN<Ql@ ^*ĺ. n4#ɿ"GU?. 
邠2$9{&)iH 9zWu=z}шuTH ׽ 7Wפ6%ŪAR9V 0g#K) ɓF-$i8CVj<r;%WMuWS; S7i{D3kUWcG#>ZvoH^kzٙha!cwkؿVlߏ$k!~)(£ggEP+R',EWsQs풍GSExokf (4!;|Uv=ބQ#^F$YǢV$R\KaQVvƥ;qoT>R:Jޞ̏DRnˬ$5' :Y'flUT`v]b8J1Iۋw8 wϦ[94OYSA4dJwPG*V³U>VH rw+e@dhHaY2qiWF> @ AmiК\]6ٕZTe5j1!蕊PY|-HqHSd@I)Xxr$N2%`"7)2g@H@Ŗ i$XG%ج!aH!BHٯ\ѡNI>T@=t`"$# rMїI0Dm~ * |몆=Ė$)|bƓW+!z-w.Βܰ Â)L>0JkBM Y5 P"D57EA3^cs" L`Z&r; CD$J{"Cx݈wƘg*H¡o-lTU@"\'dиX`L4QZEkOnCQ2鳂lSjK5:4|NPfRx,4@@0 !+z"C ܖP4 < gǘ !V33wp)24xӨqS'Ƶ'F Q|PQ8IlgV Mi4fdH,1xDO>d<)Hi).u(+g7AmϚB!H*V[ fLiڐ'a7O 1f[i+k9- &@cfK|X:dh9rM)N$$DMgAĵc`ĂfI~J 8FsO5VeLE/B)$T13DA6)řhkiw(T\>qOpb_5fTCoF bH6ͨ\%)if6@\$m |85̎#n֔Im4RYxUɍ%Q7NRyb7ήY<?F JK~&~6hT){J?Mx"J EHw۞5 ,)L,~Ѱe]=7= > ]ʑ4m^VP1mu{!Jp%@p3MZ۽Ԟ&7y4K ȵ/laM_!|0%i f%w*=kEo (D!`D ߄¨H1H]RXU:Q̆<lPZ+ h$qg =|ݩ::!7EX߫g?.4SO>lz)&o/m;Bfէ ~;5{VUK5JȐp߾w_\]?ޯ /OcŮj ǭ0h"U\##q=4`{G;~rrT9uSamI#f | ͭn; b4A3~ev\ަo& - 氱2*N+KֹgdRzTu<Nӊ9DF2RZU6]q9=圧Q1H sdcg$;Gz֬YHρ(d(߻9dMV?g4KOk&Fpg3F^_vcwtۡuqvidž>q??t 9zqvPӳS @QQEx#uXmn|o~SPPŇoc.lMrS5#mK7cr!،\TZ7APY?P:۵uLtUƂ7O5`eq8P`Gy8'=iɶo  Mg_۷)_bA64YO!y2F 6rD@oZ !¢TZ/5ٺ]-˟l=86U++c%ξN!utCf9YbBZjҌ,1zޖ=jg94FB崥$/ש®ՆȰc۾5)n5,ԝXpiͨQN[3j~?P^P4kfk:~֗V!;Sy0Uh9Cj۹䩅as鄪gF-2V 3^TwlۘfEcLS{Q$AC ˥:WCR{w693{߆;,'CB:ibbdžRfOKxEoRy [k9/< 3Y99zUFx#`6L/hg*l>l5ٞJ@ESk|]KnNl9ьI+\Jdֱr}J$F#LBI%,OqRJq,*w-VYaoK Q"SAP";B$ۭ'ldk r |2B&>G#JVN\Uf=UUBo=%+^HݫOEү-aԝ %h ! `J ]@!qwErH &3vr#xO m3 po9)V8S>C4b>?X9nsUϗ@-%lJ:?yzv*oc`=EcD`1')ꡜ9עn ' O)^.KLZqݲkm9gRK0fpTsSZ@ X'e24E+ l֘cH'WMYA`EU*aQ±E=e43SK_>onAb7T7npXU "}ʙ ]Rifj;b;Al^^t%z(͗>?Lr3Wa۾.?_>~håWա߸"V*+D`S{fKVf%k:? ^hÍdb -^K \ $O2s$Em8e6fíTSb4[96aׂb.oj\p_w :q,#JhTH74h)I!N\E[R,DžG@9IF}Vf#&RfmH+q Ly=|6/7r\dХO,*r0%{#* @j(OD )&ľpAyw;>O;S܇|0ũM?}LoLQ|^qvbRBDil ! 
xgԲlY?Ubw؍KQ626ו>hkΛK%?<{1E$]` VeN*+вjjpk]zg Ta@qZsfL@ a"u0_NY-sA^.3k`+8ggZ y wS 4{AgrȯZct.fxqB"=bQM7f=pYKZg-?V" 635߭zuVo\BҜ_Wgy:2dsbVi9 u˨stb*n* Xt3!ywubփ26O+= 5A4rrYjk\_܎}wn'`۷O_y;ݹ$ܨܼsnX=[ @lz4gd6%˶G۵'6j|_3bToky %s`Z7=SgO͞yh.͹lsy8KF2qO#^V f-SI"eA忲 9R'hd_.~W=*=*=*=jJa1UehTabބȕc U΂2 h4I$O5_ߟ ^,:12NX'u@1E꒴H, i$YޤYwq>Y%ZBg'p+\C,9X&oFEX4)M1F@4^lM-݁h!{;m $a97DӾ3ڊC#W6Wٵ]$?mj.׸5yzq H<`/<>F1#>77ZF9Hn|~=vܣ> jT^Y?y kH mww_ӻϿe]a=Ͼ8&teΑ=Мۇzۯ>/q*{(0ype9n?Kl[_WR}m[1C1T(;T)+"Гw^: _I )MCi%v$lI9sh`nmgf("_"ylpi_7KN˻mQ`4w^}h4>3d@d+0C/Puіΐ3ZaіOhIz47FcmXiIuR4+BYjN2ҰBBJ(呡QnFR]k%z@F- TD`**rB2ά7$!:VI?K OO֓̑!ٵ}:ޯ8&W}SoQ"J^r]>.*~Z~N&DVdB3PZi+ g*ʩ 32^K^X!=j+%՛+@G+A.'W]AJh]")CII.e+ƚL, ytLQxCu ʺ`(kxmŘ sXLЊ]Ӄn_h*~e?\] %JV`N<"?y|WA709$tX  ogK&ْIû,F\D1M6gK;aii2q6"Sfg%?]_XabSM9ǟ;lQcKxw*BL4%PW(rz82fAЦW"$!?9(adjvK:ҕ )b!D Y$9^oO8PbԞw<v)*{ bюsSf hMk]zyZ  KxJQڗh݋S c/q ʄچ4N{x2k[Ǯ½͋ A4/h^E^(?[YJQkUp9 K+RZRza}];EYUs_!"RHIе9aCsy-xa$Oj_ZY扷󂖺l(JH' shHO6PPCOí`OZ܂QE¨``[T_\KƂ&9&%ȸSoi4@(7^/)塛2(FO> Z0Zsg;='A lVȸYa]BlPv:#fk4PZ`5^E$CZـ˥n(;9Rh%zK7PҰEqVoLD>g4Oܖഠn?7wܴ˛jј7?6 t>tw<~nt'ti=]KXtxW~Qb}Np@_ݬE% y"%SBRok7u/[&Q$v'];Edo9ՊlSB"(YOlo^$8njfSWsy?=́nL4b%!w&9U&AԪ vpDIД <(Mz y= GK%Slfd=$S% gIyQw -F)%6& ;n2ZIMRc}vgB`:/,IPJOv[%C5-Y~vTSR#;LIuN:FYV_\glN5Mv*ra' 7eTC9Cݹsߑ>{x*K#CmH ,jݡEv.1vDy %7nmH ,R;-*@Ab#:HnG1ݲ'ڐ.2E#ecˠY8H͠?t<2< $T34H\)fK ^0Rs#Dru<+OI1G4NI<7LDIiZ{uPZ'sN[ s·B3.b7hW V\1̆ZHw܍ej%5YL8"x%sS㨲Ԗ5\rSif5]CY-zqϷVȯN#RrK(jnv E*xe4Y@Q(}Bq6| Rr8.No _x= )`n: !GM 4,e҂TN*a+j%1Rs[bH5H;ӨQ7:% T~X (Rpqp]Tq2qFUb (- g*d9Ղ /U1_xMʂFYZ6zTVX8RVr,Q  A5bQ;CN=NF:ե*UzS/i-HB)¸|Fɦr:\$13#B yr_VȊ3m{Cs%uw3jG$.6HQTVKg)@[^ JR0V)A5l- C":hYTTS^+(wAQjgxpʂ'u ~Oo&!9=jDK|{9Y&n0N /k [S__ias0^ج~w|q~0_WF"`t.ѹd|Xi jET.TJ^HWbir%jFo GLGsΆ;?m`|]cgЦ 2S~s6.Нc9=A`9`n9 !F3OآBg, 냶BR#S`g老[(Hňrm|qig~R~[Mt,؏<5eqOe[54?Fz`vS??gV|>·x?q~_m(brH~/?!w}'lLdp<|{pHI v:[xd<q1Rm#C@uk+Llx}12=%n1nPACZ C"϶4kvLjIk.D+﮿9{9)"I<,`lu'JhvJOEHPۆ ns :(?V6t2G׏Q}IT]5#z ZAƎ=&*iٷ9M@3؜JPΟr sv9B0-ͩV@SsOSm! 
H!TSd'MH?b\ bj# V}6ʷБeڐ.92eޒli7jю21w$iy[vp,V !/\Ddi=-E\ bD')혻ܲug8Fj6$䅋hsY#ʼnh~/'r% ȏt0 s0;Ch!rJno'7MQkoښF{~OSrU)VmNsOb bsM\^|uh(kY===-k# >(Qj]hiZ R̪~+Ã<٥xHTꡒg?ԕx 1CeM >WY]Θô jkXurC'Wf+ǂ)Jy˳ ,Y9wx`d}ST_Rq/nw\i)IK9%ST_RN`tZ*2TWJ΅L/{3>\)L,I2NOUw-`'nߋE`b9ݤPxfGDYoޔ3$jD|Oڀ*_a9+5XM DWq>Ǥ)RM]:vI`]E)L1zMcnH %^4 ؁jۮv5nS(2kD:mnvʶO͏7ŐBWg\&?%1\!̉$-vLlq?V/L'ꐘv%r65eHŗjYdS˼[&) 3&eMdB#[3ϸ6 sM׍Q@oI34GsZ6iSJPh I B3JÏPNpNMN-!}~Ӂm\uͯ r>/ESbmz ?0WE+Wakmn "RԀmxaw!*XB-+$ ֈ+&L 1Q T#!)A0oX-Pr*}HQ C:bL-c(oŦ|׮f7Q*9"qi9*UB8: 7ֈ7>`XSBi`F|Qba $*Fq癮|Jح§2Nܫv50LD'gIJ`ziEa{?T9V{Ac54Ǫj5wrs'ZZr$R$@Ea%KV $\/%k A$Ǽr5+v;=n9\ $Y&4Ќ m)UɰrO8/h^bVR<(h.TZ/ )é7z5nr}p)/ 攙""h…gj7J̌)1DqEߠJS (w9X#M`{- - O7FK'hk/n..vk#zЈ*hVL š"Z۶.  -uY{-]6buwۋ0F #շ-STu[ i؝|q|ZH)+8PGXAFRouR:ݥS l1,-]=71ՔYJGo,25i1b܊Tc~\@eJui@S:`9}{R3xY{<8ZEf(3ݏn!󴴤T#RK3}ʒaP,QwSI,Ruj•yER5,)GyT'񩩎a!J=8H)t(`IE%J!J=hr2BzT. C(DC QꡢJ Q!gztl( A}!JMHzv((@2ɄFTTcXX=S_1E3SX%m}3h76_z?K] t8 :^OqБRVA鹢8YƢKspX;ͅ~:Үi(*f kcv,~w۔5\n\gCo %d&AOyA}SVh @m6A{4Ӡ0 D\1ED4ɉ;6C$<>dr1ذ._Tbr1"|bi'mKHЋM7靛mT3$-q̀;4VQI0=:LcOS}զN>W "]P2BK)ҒjneZx2.ɺZzC쬥Rn򴔛dKST_R >r/}T_Rh묥CҼkV2%<--ծ0g-ei)UOY/՗T+\9{!j)yZi՗T+.Ιak\'TKj\K>ω)kẹ̀1V6>E@fI)kLoD g11шSea^C 2Íe9EfUOnlЕ=sxi-KouaJ&{LMG—&,0nݘT'ړdPm26cl}š~#Goyk!M =f.~*eS-0UDidʹT:v~m]cfO}9oV'wZ^L&DZRB{KhIz`%eϿZ{@_MK͕v &0Af2A) CrT).: !O%6m/7Ê-xS؇Wen>$#d VJ:69ƺl\ 0jQϖ/kqH.绫OoAXtۮv[]H6ʰ۰ϖp~E~{%z}Jr"F Nhg.Q\:0qy߫^y~=f9|uZ!} qx-?/F >;`"ptd[q^sDA a:%pUʺ0ytb'I6QIOvyv0گ%o`?lɄPG.4hC\Ų֭F( ٷ_+:wH( -#*ۧ3¢tT߱{xo?Moniw&ao[ 7!??(var/home/core/zuul-output/logs/kubelet.log0000644000000000000000004102557015134075555017713 0ustar rootrootJan 21 06:35:09 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 21 06:35:09 crc restorecon[4716]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]:
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 
06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 
crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 
06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:35:09 crc restorecon[4716]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 
crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc 
restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc 
restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:10 crc restorecon[4716]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.329821 4913 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336037 4913 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336067 4913 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336077 4913 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336086 4913 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336094 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336103 4913 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336111 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336120 4913 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336128 4913 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336136 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336144 4913 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336152 4913 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336160 4913 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336168 4913 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336176 4913 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336184 4913 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336191 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336199 4913 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336222 4913 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336234 4913 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336244 4913 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336252 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336260 4913 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336267 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336275 4913 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336283 4913 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336291 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336298 4913 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336306 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336313 4913 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336321 4913 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336329 4913 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336337 4913 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336345 4913 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336353 4913 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336361 4913 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336369 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336377 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336384 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336392 4913 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336403 4913 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336414 4913 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336423 4913 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336431 4913 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336438 4913 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336446 4913 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336454 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336464 4913 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336471 4913 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336479 4913 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336486 4913 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336494 4913 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336503 4913 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336513 4913 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336522 4913 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336530 4913 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336538 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336549 4913 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336557 4913 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336565 4913 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336572 4913 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336580 4913 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336612 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336621 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336629 4913 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336640 4913 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336651 4913 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336659 4913 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336668 4913 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336677 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336685 4913 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336852 4913 flags.go:64] FLAG: --address="0.0.0.0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336870 4913 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336887 4913 flags.go:64] FLAG: --anonymous-auth="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336899 4913 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336911 4913 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336921 4913 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336933 4913 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336946 4913 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336956 4913 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336965 4913 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336975 4913 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336984 4913 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336994 4913 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337003 4913 flags.go:64] FLAG: --cgroup-root=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337012 4913 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337021 4913 flags.go:64] FLAG: --client-ca-file=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337029 4913 flags.go:64] FLAG: --cloud-config=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337038 4913 flags.go:64] FLAG: --cloud-provider=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337048 4913 flags.go:64] FLAG: --cluster-dns="[]"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337057 4913 flags.go:64] FLAG: --cluster-domain=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337066 4913 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337075 4913 flags.go:64] FLAG: --config-dir=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337084 4913 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337094 4913 flags.go:64] FLAG: --container-log-max-files="5"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337106 4913 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337116 4913 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337125 4913 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337134 4913 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337143 4913 flags.go:64] FLAG: --contention-profiling="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337152 4913 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337161 4913 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337171 4913 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337179 4913 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337191 4913 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337200 4913 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337209 4913 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337217 4913 flags.go:64] FLAG: --enable-load-reader="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337226 4913 flags.go:64] FLAG: --enable-server="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337239 4913 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337254 4913 flags.go:64] FLAG: --event-burst="100"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337265 4913 flags.go:64] FLAG: --event-qps="50"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337277 4913 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337288 4913 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337301 4913 flags.go:64] FLAG: --eviction-hard=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337316 4913 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337327 4913 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337339 4913 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337351 4913 flags.go:64] FLAG: --eviction-soft=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337361 4913 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337370 4913 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337385 4913 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337394 4913 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337403 4913 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337412 4913 flags.go:64] FLAG: --fail-swap-on="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337422 4913 flags.go:64] FLAG: --feature-gates=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337444 4913 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337453 4913 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337462 4913 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337471 4913 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337481 4913 flags.go:64] FLAG: --healthz-port="10248"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337490 4913 flags.go:64] FLAG: --help="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337499 4913 flags.go:64] FLAG: --hostname-override=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337507 4913 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337516 4913 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337525 4913 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337534 4913 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337542 4913 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337551 4913 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337560 4913 flags.go:64] FLAG: --image-service-endpoint=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337569 4913 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337579 4913 flags.go:64] FLAG: --kube-api-burst="100"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337616 4913 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337626 4913 flags.go:64] FLAG: --kube-api-qps="50"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337635 4913 flags.go:64] FLAG: --kube-reserved=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337646 4913 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337655 4913 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337665 4913 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337674 4913 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337683 4913 flags.go:64] FLAG: --lock-file=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337693 4913 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337703 4913 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337713 4913 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337726 4913 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337735 4913 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337744 4913 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337753 4913 flags.go:64] FLAG: --logging-format="text"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337787 4913 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337798 4913 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337807 4913 flags.go:64] FLAG: --manifest-url=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337816 4913 flags.go:64] FLAG: --manifest-url-header=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337829 4913 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337838 4913 flags.go:64] FLAG: --max-open-files="1000000"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337849 4913 flags.go:64] FLAG: --max-pods="110"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337858 4913 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337868 4913 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337877 4913 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337886 4913 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337896 4913 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337905 4913 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337915 4913 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337935 4913 flags.go:64] FLAG: --node-status-max-images="50"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337944 4913 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337954 4913 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337963 4913 flags.go:64] FLAG: --pod-cidr=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337972 4913 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337985 4913 flags.go:64] FLAG: --pod-manifest-path=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337994 4913 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338004 4913 flags.go:64] FLAG: --pods-per-core="0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338013 4913 flags.go:64] FLAG: --port="10250"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338022 4913 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338031 4913 flags.go:64] FLAG: --provider-id=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338040 4913 flags.go:64] FLAG: --qos-reserved=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338049 4913 flags.go:64] FLAG: --read-only-port="10255"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338058 4913 flags.go:64] FLAG: --register-node="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338090 4913 flags.go:64] FLAG: --register-schedulable="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338101 4913 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338117 4913 flags.go:64] FLAG: --registry-burst="10"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338126 4913 flags.go:64] FLAG: --registry-qps="5"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338135 4913 flags.go:64] FLAG: --reserved-cpus=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338143 4913 flags.go:64] FLAG: --reserved-memory=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338154 4913 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338164 4913 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338173 4913 flags.go:64] FLAG: --rotate-certificates="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338181 4913 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338190 4913 flags.go:64] FLAG: --runonce="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338199 4913 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338208 4913 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338217 4913 flags.go:64] FLAG: --seccomp-default="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338226 4913 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338235 4913 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338245 4913 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338255 4913 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338265 4913 flags.go:64] FLAG: --storage-driver-password="root"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338273 4913 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338282 4913 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338291 4913 flags.go:64] FLAG: --storage-driver-user="root"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338300 4913 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338310 4913 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338319 4913 flags.go:64] FLAG: --system-cgroups=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338327 4913 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338341 4913 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338350 4913 flags.go:64] FLAG: --tls-cert-file=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338359 4913 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338369 4913 flags.go:64] FLAG: --tls-min-version=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338377 4913 flags.go:64] FLAG: --tls-private-key-file=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338386 4913 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338397 4913 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338406 4913 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338415 4913 flags.go:64] FLAG: --v="2"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338432 4913 flags.go:64] FLAG: --version="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338443 4913 flags.go:64] FLAG: --vmodule=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338456 4913 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338466 4913 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338773 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338788 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338797 4913 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338805 4913 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338814 4913 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338823 4913 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338834 4913 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338844 4913 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338854 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338863 4913 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338872 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338881 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338889 4913 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338896 4913 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338904 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338914 4913 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338922 4913 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338930 4913 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338938 4913 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338945 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338953 4913 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338961 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338968 4913 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338976 4913 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338983 4913 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338992 4913 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339000 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339008 4913 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339015 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339023 4913 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339031 4913 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339039 4913 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339046 4913 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339054 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339063 4913 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339070 4913 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339079 4913 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339086 4913 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339094 4913 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339104 4913 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339122 4913 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339131 4913 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339139 4913 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339147 4913 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339156 4913 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339166 4913 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339176 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339184 4913 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339193 4913 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339201 4913 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339209 4913 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339217 4913 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339225 4913 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339243 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339251 4913 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339258 4913 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339266 4913 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339276 4913 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339286 4913 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339295 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339303 4913 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339311 4913 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339319 4913 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339327 4913 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339335 4913 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339342 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339350 4913 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339358 4913 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339365 4913 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339372 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339381 4913 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.339657 4913 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.350448 4913 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.350511 4913 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350669 4913 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350684 4913 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350693 4913 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350701 4913 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350709 4913 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350718 4913 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350725 4913 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350734 4913 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350742 4913 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350750 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350757 4913 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350766 4913 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350775 4913 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350782 4913 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350790 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350798 4913 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350806 4913 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350813 4913 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350821 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350829 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350837 4913 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350881 4913 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350889 4913 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350897 4913 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350904 4913 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350912 4913 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350920 4913 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350928 4913 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350935 4913 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350943 4913 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350951 4913 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350959 4913 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350967 4913 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350977 4913 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350988 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350998 4913 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351012 4913 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351022 4913 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351031 4913 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351039 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351049 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351057 4913 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351065 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351073 4913 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351081 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351089 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351096 4913 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351104 4913 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351112 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351119 4913 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351129 4913 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351139 4913 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351149 4913 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351157 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351166 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351174 4913 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351181 4913 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351189 4913 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351197 4913 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351205 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351215 4913 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351225 4913 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351233 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351242 4913 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351251 4913 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351259 4913 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351267 4913 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351275 4913 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351283 4913 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351291 4913 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351300 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.351313 4913 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351539 4913 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351553 4913 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351622 4913 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351637 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351647 4913 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351656 4913 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351668 4913 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351676 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351684 4913 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351692 4913 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351699 4913 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351707 4913 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351715 4913 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351723 4913 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351731 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351738 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351746 4913 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351754 4913 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351762 4913 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351769 4913 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351777 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351785 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351792 4913 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351802 4913 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351813 4913 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351821 4913 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351828 4913 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351837 4913 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351845 4913 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351853 4913 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351861 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351868 4913 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351876 4913 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351884 4913 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351893 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351901 4913 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351909 4913 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351916 4913 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351924 4913 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351932 4913 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351940 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351948 4913 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351955 4913 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351963 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351971 4913 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351978 4913 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351986 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351994 4913 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352002 4913 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352009 4913 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352017 4913 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352024 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352032 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352040 4913 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352047 4913 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352055 4913 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352062 4913 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352070 4913 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352078 4913 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352085 4913 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352093 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352103 4913 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352115 4913 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352124 4913 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352134 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352144 4913 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352154 4913 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352163 4913 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352173 4913 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352184 4913 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352193 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.352206 4913 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.352444 4913 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.356481 4913 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.356674 4913 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.357529 4913 server.go:997] "Starting client certificate rotation"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.357570 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.357894 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-11 16:41:31.412218781 +0000 UTC
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.358053 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.365458 4913 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.367378 4913 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.369348 4913 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.379144 4913 log.go:25] "Validated CRI v1 runtime API"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.394492 4913 log.go:25] "Validated CRI v1 image API"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.396828 4913 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.400207 4913 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-06-30-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.400282 4913 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.428559 4913 manager.go:217] Machine: {Timestamp:2026-01-21 06:35:10.426708337 +0000 UTC m=+0.223068020 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7037ee30-9526-47b8-97e2-90db93aaec61 BootID:dc2e078c-6a92-4a2e-a56c-2176218bd01c Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0e:ab:27 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0e:ab:27 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1c:a7:8e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:48:e2:5d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:78:c3:e7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:75:0a:e1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f6:92:18:fd:1e:14 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:c5:c3:55:a6:21 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.428963 4913 manager_no_libpfm.go:29] cAdvisor is build without cgo
and/or libpfm support. Perf event counters are not available. Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.429181 4913 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430175 4913 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430401 4913 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430453 4913 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430810 4913 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430822 4913 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.431042 4913 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.431085 4913 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.431321 4913 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.431890 4913 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432681 4913 kubelet.go:418] "Attempting to sync node with API server" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432704 4913 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432735 4913 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432752 4913 kubelet.go:324] "Adding apiserver pod source" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432775 4913 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.435136 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.435209 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.435212 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.435328 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.436164 4913 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.436816 4913 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.438265 4913 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439090 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439139 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439158 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439175 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439203 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439278 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439300 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439329 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439349 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439371 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439421 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439440 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439970 4913 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.440806 4913 server.go:1280] "Started kubelet" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.441974 4913 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.441949 4913 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.442320 4913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 06:35:10 crc systemd[1]: Started Kubernetes Kubelet. Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.444536 4913 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445039 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445104 4913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445171 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:30:24.101466077 +0000 UTC Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445304 4913 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.448615 4913 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445445 4913 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.446077 4913 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.447934 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.448958 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.449277 4913 factory.go:55] Registering systemd factory Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.449313 4913 factory.go:221] Registration of the systemd container factory successfully Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.447856 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.447461 4913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cab7dcc8c1eba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:35:10.44075897 +0000 UTC m=+0.237118683,LastTimestamp:2026-01-21 06:35:10.44075897 +0000 UTC m=+0.237118683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.454928 4913 server.go:460] "Adding debug handlers to kubelet server" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.455767 4913 factory.go:153] Registering CRI-O factory Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.455964 4913 factory.go:221] Registration of the crio container factory successfully Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.456254 4913 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.456414 4913 factory.go:103] Registering Raw factory Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.456547 4913 manager.go:1196] Started watching for new ooms in manager Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.458786 4913 manager.go:319] Starting recovery of all containers Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465132 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465201 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" 
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465218 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465231 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465245 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465258 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465271 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465283 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465300 4913 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465313 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465331 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465344 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465359 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465373 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465388 4913 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465400 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465413 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465425 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465438 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465452 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465470 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465485 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465499 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465516 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465534 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465549 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465662 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465687 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465706 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465724 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465744 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465762 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465780 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465796 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465814 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465831 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465849 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465867 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465884 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465901 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465918 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465936 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465956 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465977 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465995 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" 
seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466011 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466030 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466048 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466064 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466081 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466100 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466116 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466171 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466190 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466209 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466229 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466246 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466263 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466279 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466295 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466311 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466329 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466351 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466368 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466384 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466401 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466418 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467276 4913 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467312 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467334 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467353 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467370 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467387 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467408 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467424 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467442 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467461 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467481 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467498 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467514 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467530 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467547 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467563 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467578 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467616 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467632 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467646 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467661 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467676 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467691 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467710 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467724 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467741 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467758 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467774 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467790 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467804 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467820 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467836 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467852 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467867 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467882 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467900 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467917 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467935 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467958 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467976 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467993 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.468012 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.468048 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471221 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471301 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471353 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471378 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471421 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471443 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471473 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471491 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471519 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471537 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471555 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471581 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471622 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471652 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471667 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471712 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471736 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471752 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471779 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471797 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471823 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471853 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471869 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471887 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471915 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471932 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471959 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471974 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471992 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472022 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472041 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472067 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472090 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472107 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472142 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472164 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472190 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472210 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472228 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472250 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472269 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472290 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472309 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472338 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472377 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472403 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472422 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472446 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472463 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472485 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472502 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472521 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472554 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472571 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472650 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472667 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472686 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472708 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472726 4913
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472748 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472765 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472784 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472811 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472828 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472848 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472864 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472879 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472897 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472916 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472935 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472953 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472972 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472994 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473012 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473035 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473050 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473067 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473087 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473104 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473119 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473141 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473162 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473188 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473230 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473265 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473293 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473313 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473351 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473374 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473391 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473415 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473433 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473457 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473476 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473494 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473516 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473535 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473557 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473574 4913 reconstruct.go:97] "Volume reconstruction finished" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473586 4913 reconciler.go:26] "Reconciler: start to sync state" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.495998 4913 manager.go:324] Recovery completed Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.508202 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.509822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.509892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.509905 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.510808 4913 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.510848 4913 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.510887 4913 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.520905 4913 policy_none.go:49] "None policy: Start" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.521658 4913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.522479 4913 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.522520 4913 state_mem.go:35] "Initializing new in-memory state store" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.525025 4913 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.525083 4913 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.525110 4913 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.525167 4913 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.526349 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.526428 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.551651 4913 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.590073 4913 manager.go:334] "Starting Device Plugin manager" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.590187 4913 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.590246 4913 server.go:79] "Starting device plugin registration server" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.590946 4913 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 06:35:10 crc 
kubenswrapper[4913]: I0121 06:35:10.590979 4913 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.591777 4913 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.591986 4913 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.592013 4913 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.601866 4913 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.626515 4913 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.626780 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.629411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.629462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.629477 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.629674 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630367 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630516 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630907 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630941 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630954 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.631059 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.631253 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.631334 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632023 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632089 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632111 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632411 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632545 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632643 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632671 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632645 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632786 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.633026 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.633038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.634552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.634652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.634675 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.634921 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635138 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635200 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635651 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636611 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636650 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636668 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636929 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636975 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.637066 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.637113 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.637134 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.638814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.638853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.638864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.652628 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.675920 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.675979 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676012 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676035 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676058 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676078 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676101 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676163 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676259 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676329 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676354 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676378 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676445 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676502 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.691439 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.693504 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.693584 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.693644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.693704 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.694405 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.777761 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.777907 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.777957 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778006 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778049 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778014 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778094 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778031 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778189 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778225 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778255 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778290 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778388 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778419 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778479 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778486 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778537 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778573 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778649 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778676 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778711 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778744 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778749 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778818 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778575 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778798 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778438 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778333 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778911 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778775 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.895014 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.898030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.898091 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.898115 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.898159 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.898743 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.986906 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.015440 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.018690 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.021449 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6c9fb01627829a70a68ed9eefec31d77fa872cad0ea9163d0777c7a63a9849cf WatchSource:0}: Error finding container 6c9fb01627829a70a68ed9eefec31d77fa872cad0ea9163d0777c7a63a9849cf: Status 404 returned error can't find the container with id 6c9fb01627829a70a68ed9eefec31d77fa872cad0ea9163d0777c7a63a9849cf
Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.038293 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9295f1aef598acfd56193b8756d6ee2a90d499d7c737bda0e9aeb17d2f8ac104 WatchSource:0}: Error finding container 9295f1aef598acfd56193b8756d6ee2a90d499d7c737bda0e9aeb17d2f8ac104: Status 404 returned error can't find the container with id 9295f1aef598acfd56193b8756d6ee2a90d499d7c737bda0e9aeb17d2f8ac104
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.041155 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.047485 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.054320 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms"
Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.068347 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-35209ec40851f3f43f4739361183b5fea6e439b7cea18436706dbfa7fae1ee74 WatchSource:0}: Error finding container 35209ec40851f3f43f4739361183b5fea6e439b7cea18436706dbfa7fae1ee74: Status 404 returned error can't find the container with id 35209ec40851f3f43f4739361183b5fea6e439b7cea18436706dbfa7fae1ee74
Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.069800 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4fe3ad1f82a43e44ade486c7c6224a462c85252d47b3f5fa624ce5961fda7ff9 WatchSource:0}: Error finding container 4fe3ad1f82a43e44ade486c7c6224a462c85252d47b3f5fa624ce5961fda7ff9: Status 404 returned error can't find the container with id 4fe3ad1f82a43e44ade486c7c6224a462c85252d47b3f5fa624ce5961fda7ff9
Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.238331 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.238440 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.299076 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.300268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.300321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.300340 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.300378 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.300852 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.442801 4913 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.448725 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:01:09.525875883 +0000 UTC
Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.471038 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.471530 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.532978 4913 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6" exitCode=0
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.533110 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.533239 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4fe3ad1f82a43e44ade486c7c6224a462c85252d47b3f5fa624ce5961fda7ff9"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.533392 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.534985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.535039 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.535056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.535456 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.535509 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"35209ec40851f3f43f4739361183b5fea6e439b7cea18436706dbfa7fae1ee74"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.537354 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204" exitCode=0
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.537446 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.537478 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"440822bbb71d012ca630012e6cfd14d6bfd90d81c36a5252ad56699e809e69d0"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.537681 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.538474 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.538508 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.538520 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.539681 4913 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61" exitCode=0
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.539757 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.539778 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9295f1aef598acfd56193b8756d6ee2a90d499d7c737bda0e9aeb17d2f8ac104"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.539960 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.540156 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541904 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.543427 4913 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd" exitCode=0
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.543459 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.543482 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6c9fb01627829a70a68ed9eefec31d77fa872cad0ea9163d0777c7a63a9849cf"}
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.543549 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.544381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.544405 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.544417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.626847 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.626948 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.632722 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.632778 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.855548 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.101642 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.102886 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.102917 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.102926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.102951 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 06:35:12 crc kubenswrapper[4913]: E0121 06:35:12.103284 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.449507 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:14:42.241056962 +0000 UTC
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.551686 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.551742 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.551761 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.551772 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.553904 4913 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3" exitCode=0
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.553967 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.554094 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.554817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.554842 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.554852 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.556713 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating
certificates Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.558621 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9"} Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.558683 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.559440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.559461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.559470 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.562583 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5"} Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.562659 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742"} Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.562678 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0"} Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.562801 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.564045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.564077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.564091 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.567012 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6"} Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.567039 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511"} Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.567056 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b"} Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.567120 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 
21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.568199 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.568238 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.568251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.450003 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:00:14.446510973 +0000 UTC Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.576117 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5"} Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.576240 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.577528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.577573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.577613 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.580271 4913 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b" 
exitCode=0 Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.580349 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b"} Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.580614 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.580806 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.581736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.582265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.582328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.585998 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.586075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.586089 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.703923 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.705801 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.705851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.705864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.705896 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.005370 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.014146 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.450245 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:09:04.934277071 +0000 UTC Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588805 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4"} Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588889 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588897 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa"} Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588935 4913 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2"} Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588953 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588977 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590583 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590549 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590636 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590673 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.630758 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.630967 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.632817 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.632864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.632882 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.451486 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:15:00.309401415 +0000 UTC Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599550 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599692 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599719 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599550 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19"} Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599808 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400"} Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601182 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601241 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601262 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601409 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601481 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.451952 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:34:07.69266326 +0000 UTC Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.561949 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.602264 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.602387 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.602451 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604784 4913 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604800 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604941 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.947209 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.081428 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.081709 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.081776 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.083490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.083554 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.083580 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:17 crc kubenswrapper[4913]: 
I0121 06:35:17.452327 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:58:48.009605249 +0000 UTC Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.577921 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.605176 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.605299 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.605419 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.606669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.606713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.606729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.607452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.607543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.607567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.333450 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.453457 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:31:55.384987993 +0000 UTC Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.486876 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.487072 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.487208 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.488978 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.489043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.489067 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.609222 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.611068 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.611130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.611154 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 06:35:19 crc kubenswrapper[4913]: I0121 06:35:19.454383 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:34:32.13335298 +0000 UTC Jan 21 06:35:19 crc kubenswrapper[4913]: I0121 06:35:19.563005 4913 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 06:35:19 crc kubenswrapper[4913]: I0121 06:35:19.563113 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.385701 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.386020 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.388036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.388119 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.388139 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 
06:35:20.454717 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:54:26.366544738 +0000 UTC Jan 21 06:35:20 crc kubenswrapper[4913]: E0121 06:35:20.602011 4913 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 06:35:21 crc kubenswrapper[4913]: I0121 06:35:21.455658 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:14:13.620693737 +0000 UTC Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.443945 4913 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.456420 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:41:02.405060094 +0000 UTC Jan 21 06:35:22 crc kubenswrapper[4913]: E0121 06:35:22.559079 4913 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.924553 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.924761 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.926061 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.926089 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.926097 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.110637 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.110782 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.132575 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.132662 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 
403" Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.457815 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:26:21.855650178 +0000 UTC Jan 21 06:35:24 crc kubenswrapper[4913]: I0121 06:35:24.457985 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:32:42.274699282 +0000 UTC Jan 21 06:35:25 crc kubenswrapper[4913]: I0121 06:35:25.458792 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:15:20.265502574 +0000 UTC Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.460007 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:50:26.683927078 +0000 UTC Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.561873 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.573277 4913 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.592125 4913 csr.go:261] certificate signing request csr-h7hvp is approved, waiting to be issued Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.598093 4913 csr.go:257] certificate signing request csr-h7hvp is issued Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.090841 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.091016 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:27 crc 
kubenswrapper[4913]: I0121 06:35:27.092188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.092223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.092236 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.096394 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.460338 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:54:18.406324311 +0000 UTC Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.599681 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 06:30:26 +0000 UTC, rotation deadline is 2026-11-10 07:06:07.287017675 +0000 UTC Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.599747 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7032h30m39.687274964s for next certificate rotation Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.634731 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.635968 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.636035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.636054 4913 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.096669 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.098902 4913 trace.go:236] Trace[447619719]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:35:14.108) (total time: 13989ms): Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[447619719]: ---"Objects listed" error: 13989ms (06:35:28.098) Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[447619719]: [13.989910874s] [13.989910874s] END Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.098939 4913 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.101133 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.101866 4913 trace.go:236] Trace[681385259]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:35:13.357) (total time: 14744ms): Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[681385259]: ---"Objects listed" error: 14744ms (06:35:28.101) Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[681385259]: [14.744047881s] [14.744047881s] END Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.101897 4913 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.102335 4913 trace.go:236] Trace[577624512]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 
06:35:13.742) (total time: 14360ms): Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[577624512]: ---"Objects listed" error: 14359ms (06:35:28.102) Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[577624512]: [14.36002141s] [14.36002141s] END Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.102359 4913 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.103462 4913 trace.go:236] Trace[694063199]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:35:13.978) (total time: 14122ms): Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[694063199]: ---"Objects listed" error: 14122ms (06:35:28.101) Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[694063199]: [14.122284153s] [14.122284153s] END Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.103494 4913 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.103682 4913 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.338302 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.431323 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.441720 4913 apiserver.go:52] "Watching apiserver" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.443696 4913 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444087 4913 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-daemon-sqswg","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-jpn7w","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-multus/multus-gn6lz"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444431 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444544 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444654 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.444839 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444944 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.445254 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.445400 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444951 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.445553 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.445637 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.445655 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.445801 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.446894 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450426 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450524 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450614 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450744 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450972 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451093 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451109 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451249 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451257 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451269 4913 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451313 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451368 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451444 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451462 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451476 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451582 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451614 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451661 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.452014 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.452799 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.452865 4913 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.452977 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.454988 4913 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.460563 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:04:46.443890349 +0000 UTC Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.472634 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.486428 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.501096 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505821 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505855 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505873 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505891 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505908 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505924 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505939 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.505954 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505969 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505985 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506001 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506017 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506032 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506048 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506064 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506109 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506132 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506158 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506175 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506196 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506198 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506198 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506375 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506401 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506214 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506420 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506432 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506462 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506793 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506582 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506663 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506761 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506785 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507132 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507225 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506817 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507287 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507309 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507325 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507354 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507371 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507389 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507404 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507421 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507437 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507453 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507468 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507484 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507499 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507516 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507533 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507549 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507564 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507580 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507615 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507634 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507656 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507679 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507702 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507720 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507737 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.507758 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507783 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507810 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507832 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507851 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507871 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507889 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507921 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507938 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507953 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507968 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507983 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507999 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508014 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508032 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508046 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508062 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508081 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508098 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508115 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508130 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508144 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508160 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508175 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508189 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508207 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508222 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508238 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508254 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508271 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508287 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508302 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508323 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 
06:35:28.508338 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508353 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508370 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508387 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508401 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508418 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508433 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508449 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508464 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508481 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508496 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508511 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508526 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508543 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508558 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508572 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508605 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508622 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508638 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508652 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508667 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508681 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508697 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508729 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508758 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508777 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508792 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508807 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508822 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508840 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508857 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508872 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508886 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508901 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508916 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508932 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508946 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508961 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508977 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508993 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509011 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509028 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509045 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509062 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509077 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509094 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509111 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509127 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509156 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509173 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509189 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509205 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509220 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509236 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509252 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509268 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509284 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509300 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509316 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509349 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509366 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509382 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509398 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509414 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509431 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509446 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509462 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509477 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509494 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509514 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509531 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509547 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509563 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509578 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509608 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509626 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509643 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509659 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509675 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509690 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509706 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509722 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509738 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509835 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509853 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509868 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509883 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509934 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509951 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509968 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509984 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510000 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510018 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510035 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510053 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510069 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510086 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510125 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510142 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510157 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510174 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510190 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510209 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510226 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510243 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510261 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510277 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510295 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510316 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510332 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510349 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510365 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510382 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510399 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510441 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510464 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510491 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cnibin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510507 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-kubelet\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510525 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-daemon-config\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510546 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510563 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-multus\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510582 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6s4k\" (UniqueName: \"kubernetes.io/projected/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-kube-api-access-c6s4k\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510794 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510822 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510849 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510875 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/941d5e91-9bf3-44dc-be69-629cb2516e7c-rootfs\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510893 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-multus-certs\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510910 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/941d5e91-9bf3-44dc-be69-629cb2516e7c-proxy-tls\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510926 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-hosts-file\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 
06:35:28.510943 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-hostroot\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510959 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-conf-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510974 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510992 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511012 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.511033 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpzhr\" (UniqueName: \"kubernetes.io/projected/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-kube-api-access-jpzhr\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511048 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511067 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-os-release\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511081 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-bin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511095 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-etc-kubernetes\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511114 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511131 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-k8s-cni-cncf-io\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511147 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-netns\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511163 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/941d5e91-9bf3-44dc-be69-629cb2516e7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511181 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511210 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511229 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511247 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511262 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cni-binary-copy\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511278 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-socket-dir-parent\") pod \"multus-gn6lz\" (UID: 
\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlg6n\" (UniqueName: \"kubernetes.io/projected/941d5e91-9bf3-44dc-be69-629cb2516e7c-kube-api-access-rlg6n\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511319 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-system-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511380 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511392 4913 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511402 4913 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511413 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 
21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511422 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511432 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511441 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511451 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511461 4913 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511474 4913 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511488 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511500 4913 reconciler_common.go:293] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511513 4913 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511526 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.512833 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.513425 4913 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.513937 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514155 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514206 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514238 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514239 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514435 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514457 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514509 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514415 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514752 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514927 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515314 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515303 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515402 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515418 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515467 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515536 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515647 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515955 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.516223 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.516505 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.516897 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.516977 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517213 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517470 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517660 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517768 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517898 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517906 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518122 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518156 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518315 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518349 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518521 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518725 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518780 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518910 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.519047 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.520657 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526279 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.520854 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.520998 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.521156 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.521376 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.522260 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.522714 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525115 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525302 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525325 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525536 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525755 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525917 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525959 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525987 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526002 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526447 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.526635 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526645 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.526721 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.026697107 +0000 UTC m=+18.823056770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526903 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527012 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527177 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527573 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527699 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527961 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.528267 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.528543 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.528626 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.528933 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529195 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529439 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529688 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529761 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.530108 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.530174 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.030154879 +0000 UTC m=+18.826514622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.530276 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529938 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531190 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531565 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531607 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531717 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531918 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531983 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532140 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532178 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532263 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527675 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532398 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532581 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532716 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532787 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532823 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532884 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.533338 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.533367 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.533379 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.533435 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.033422326 +0000 UTC m=+18.829781999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.533898 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534037 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534386 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534404 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534649 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532913 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534863 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534925 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.535094 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.535209 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.535124 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.535947 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536125 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536418 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536611 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536696 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536815 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536987 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536958 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537022 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537092 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537189 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537339 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537424 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537445 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537475 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537685 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537772 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.537842 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.037821684 +0000 UTC m=+18.834181357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538001 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538014 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538033 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538146 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538215 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538240 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538292 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538463 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.538546 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.539916 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.539929 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539963 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.540003 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.039990441 +0000 UTC m=+18.836350114 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539999 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540074 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534228 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540484 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540519 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540609 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540670 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538641 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540799 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540918 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540985 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541062 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541193 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541520 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541546 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541738 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541754 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538709 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538722 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538748 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538758 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538875 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538952 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527876 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539158 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539404 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539470 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539762 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542185 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542247 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542578 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538674 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527847 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539092 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542819 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542865 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.543383 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.544307 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.545172 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.545954 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.546385 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.546498 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.549948 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.550432 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.552486 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.552564 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.552907 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.552926 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.553041 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.553486 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554423 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554689 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554764 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554788 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554825 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554900 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554915 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554954 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.555470 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.555570 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.556338 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.559248 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.562246 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.562855 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.563075 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.565339 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.565754 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.566631 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.568871 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.570377 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.571088 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.572424 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.572400 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.574462 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.576201 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.577078 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.578783 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.579686 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.580799 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.581211 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.581870 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.583235 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.583228 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.584434 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.585152 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.586486 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.586965 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.588162 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.588669 4913 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.588789 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.588857 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.589322 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.590309 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.590807 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.592142 4913 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.592248 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.593092 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2
c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.594067 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.595033 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.595472 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.596934 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.597950 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.598498 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.599563 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.600228 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.600688 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.601829 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.602803 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.602802 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.603388 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.604218 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.604883 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.605732 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.606446 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.607293 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.607758 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.608244 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.609141 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.609895 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.610842 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612333 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-multus\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612373 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6s4k\" (UniqueName: \"kubernetes.io/projected/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-kube-api-access-c6s4k\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612412 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/941d5e91-9bf3-44dc-be69-629cb2516e7c-rootfs\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612442 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-multus-certs\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612469 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-multus\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612516 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-multus-certs\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612580 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/941d5e91-9bf3-44dc-be69-629cb2516e7c-rootfs\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612473 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/941d5e91-9bf3-44dc-be69-629cb2516e7c-proxy-tls\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.612718 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-hosts-file\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612760 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612833 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpzhr\" (UniqueName: \"kubernetes.io/projected/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-kube-api-access-jpzhr\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612860 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612877 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-hosts-file\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612886 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-os-release\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612910 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-hostroot\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612936 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-conf-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612950 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612981 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-k8s-cni-cncf-io\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613006 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-netns\") pod \"multus-gn6lz\" (UID: 
\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613034 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-bin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613051 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-os-release\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613063 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-etc-kubernetes\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613081 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-conf-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613011 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-hostroot\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613112 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-k8s-cni-cncf-io\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613107 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-netns\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613129 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/941d5e91-9bf3-44dc-be69-629cb2516e7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613114 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613167 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-etc-kubernetes\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613186 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlg6n\" (UniqueName: 
\"kubernetes.io/projected/941d5e91-9bf3-44dc-be69-629cb2516e7c-kube-api-access-rlg6n\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613217 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-system-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613213 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-bin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613366 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-system-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613392 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cni-binary-copy\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613436 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-socket-dir-parent\") pod \"multus-gn6lz\" (UID: 
\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613458 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cnibin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613474 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-kubelet\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613513 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-daemon-config\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613547 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613562 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cnibin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613647 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613692 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-socket-dir-parent\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613699 4913 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613736 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-kubelet\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613752 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613792 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613802 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613812 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613822 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613832 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613868 4913 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613877 4913 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613887 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613898 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") 
on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613908 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613941 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613952 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613963 4913 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613973 4913 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613982 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613992 4913 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614027 4913 
reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614037 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614046 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614056 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614066 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614100 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614111 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614120 4913 reconciler_common.go:293] "Volume detached 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614126 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-daemon-config\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614131 4913 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614163 4913 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614174 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614186 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614198 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614208 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614218 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614220 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cni-binary-copy\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614228 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614287 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614302 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614317 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614331 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" 
(UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614344 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614356 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614369 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614383 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614397 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614410 4913 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614423 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node 
\"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614436 4913 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614451 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614467 4913 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614479 4913 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614490 4913 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614502 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614515 4913 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614527 4913 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614537 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/941d5e91-9bf3-44dc-be69-629cb2516e7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614544 4913 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614598 4913 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614610 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614621 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614630 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 
crc kubenswrapper[4913]: I0121 06:35:28.614640 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614649 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614760 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614772 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614781 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614792 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614803 4913 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614813 4913 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614823 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614833 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614842 4913 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614893 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614904 4913 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614935 4913 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614944 4913 reconciler_common.go:293] "Volume detached for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614953 4913 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614962 4913 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615005 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615015 4913 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615024 4913 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615032 4913 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615040 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" 
DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615049 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615057 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615066 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615075 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615083 4913 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615092 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615101 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.615110 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615122 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615133 4913 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615141 4913 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615151 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615159 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615167 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615176 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615186 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615197 4913 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615209 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615218 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615228 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615237 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615245 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615254 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615262 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615271 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615281 4913 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615292 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615300 4913 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615308 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node 
\"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615318 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615326 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615334 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615343 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615351 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615359 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615367 4913 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615375 4913 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615385 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615393 4913 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615401 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615409 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615416 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615425 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615432 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615440 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616151 4913 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616166 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616174 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616184 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616192 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616211 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" 
DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616220 4913 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616228 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616236 4913 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616246 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616258 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616266 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616275 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.616285 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616293 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616301 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616309 4913 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616319 4913 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616329 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616339 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616350 4913 
reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616361 4913 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616372 4913 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616382 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616393 4913 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616403 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616412 4913 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616424 4913 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616433 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616441 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616451 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616460 4913 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616469 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616478 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616487 4913 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616496 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616506 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616515 4913 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616525 4913 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616533 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616542 4913 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616551 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node 
\"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616561 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616569 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616579 4913 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616603 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616613 4913 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616622 4913 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616632 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616640 4913 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616650 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616658 4913 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616668 4913 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616678 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616687 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.618605 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/941d5e91-9bf3-44dc-be69-629cb2516e7c-proxy-tls\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.629404 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6s4k\" (UniqueName: \"kubernetes.io/projected/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-kube-api-access-c6s4k\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.629505 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpzhr\" (UniqueName: \"kubernetes.io/projected/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-kube-api-access-jpzhr\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.633778 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlg6n\" (UniqueName: \"kubernetes.io/projected/941d5e91-9bf3-44dc-be69-629cb2516e7c-kube-api-access-rlg6n\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.640197 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.643150 4913 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.651389 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.661861 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.669454 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.680292 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822
a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.693427 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.708311 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.718642 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.729297 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.733917 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58508->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.733988 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58508->192.168.126.11:17697: read: connection reset by peer" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.733930 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55662->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.734140 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55662->192.168.126.11:17697: read: connection reset by peer" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.734684 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.734716 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.741503 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.744320 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2lxrr"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.745006 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.745257 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wfcsc"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.745945 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7xtt"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.746136 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.746304 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.746564 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.746837 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.747776 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751270 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751334 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751334 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751373 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751548 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751823 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.752171 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.754516 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.760773 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.769984 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.773240 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.784142 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.789279 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.792861 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.800138 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.803887 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.809173 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.816978 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.820919 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.820972 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4r7z\" (UniqueName: \"kubernetes.io/projected/60ed8982-ee20-4330-861f-61509c39bbe7-kube-api-access-t4r7z\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821016 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821042 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821064 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821084 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821106 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825352 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825431 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825577 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825695 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825746 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jfg\" (UniqueName: \"kubernetes.io/projected/0e8f223b-fd76-4720-a29f-cb89654e33f5-kube-api-access-h8jfg\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825843 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825889 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825938 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") pod \"ovnkube-node-c7xtt\" (UID: 
\"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825974 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826012 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826050 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-os-release\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826089 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826132 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826175 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826211 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-cnibin\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826275 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826323 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826353 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826401 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826440 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826482 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826520 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.842292 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.851523 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.864513 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.876127 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.884792 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.894093 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.901661 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.915865 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.926918 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927239 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927291 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927313 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927338 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-os-release\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927361 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927382 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927404 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927432 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-cnibin\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927437 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927463 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927379 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927439 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927505 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927401 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927523 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc 
kubenswrapper[4913]: E0121 06:35:28.927567 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927577 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-os-release\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927623 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-cnibin\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927630 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.927691 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.427670096 +0000 UTC m=+19.224029769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927706 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927767 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927731 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927836 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927875 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927927 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927931 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927960 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927990 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927993 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t4r7z\" (UniqueName: \"kubernetes.io/projected/60ed8982-ee20-4330-861f-61509c39bbe7-kube-api-access-t4r7z\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928049 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928088 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928133 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928170 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928216 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928263 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928297 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928327 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928361 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928373 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928383 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928394 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jfg\" (UniqueName: \"kubernetes.io/projected/0e8f223b-fd76-4720-a29f-cb89654e33f5-kube-api-access-h8jfg\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928412 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928421 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928420 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928449 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928481 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928518 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928454 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928777 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") pod \"ovnkube-node-c7xtt\" 
(UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928856 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928900 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928988 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.929185 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.932730 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.954547 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.957680 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: W0121 06:35:28.959696 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-6dbeeeeca7a8d9ef6a945adcea1c98b4773d730ff8320fbdfcbd41b99dc5ddba WatchSource:0}: Error finding container 6dbeeeeca7a8d9ef6a945adcea1c98b4773d730ff8320fbdfcbd41b99dc5ddba: Status 404 returned error can't find the container with id 6dbeeeeca7a8d9ef6a945adcea1c98b4773d730ff8320fbdfcbd41b99dc5ddba Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.969615 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4r7z\" (UniqueName: \"kubernetes.io/projected/60ed8982-ee20-4330-861f-61509c39bbe7-kube-api-access-t4r7z\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.974629 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jfg\" (UniqueName: 
\"kubernetes.io/projected/0e8f223b-fd76-4720-a29f-cb89654e33f5-kube-api-access-h8jfg\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.029057 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:29 crc kubenswrapper[4913]: W0121 06:35:29.029252 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f47ec5_848c_4b9b_9828_8dd3ddb96a18.slice/crio-9807cabbb6df0430dfff1508c321d5e7e1a61d172a63f61f42a206bcc144557d WatchSource:0}: Error finding container 9807cabbb6df0430dfff1508c321d5e7e1a61d172a63f61f42a206bcc144557d: Status 404 returned error can't find the container with id 9807cabbb6df0430dfff1508c321d5e7e1a61d172a63f61f42a206bcc144557d Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.029316 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.029452 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.029407302 +0000 UTC m=+19.825767145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.062476 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.075768 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:29 crc kubenswrapper[4913]: W0121 06:35:29.098107 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e8f223b_fd76_4720_a29f_cb89654e33f5.slice/crio-cfd98727d4fdb5a9b3135671e6b4298f9f9e6c0badd2c3ea525a0a65894e95cd WatchSource:0}: Error finding container cfd98727d4fdb5a9b3135671e6b4298f9f9e6c0badd2c3ea525a0a65894e95cd: Status 404 returned error can't find the container with id cfd98727d4fdb5a9b3135671e6b4298f9f9e6c0badd2c3ea525a0a65894e95cd Jan 21 06:35:29 crc kubenswrapper[4913]: W0121 06:35:29.100098 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafe1e161_7227_48ff_824e_01d26e5c7218.slice/crio-2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8 WatchSource:0}: Error finding container 2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8: Status 404 returned error can't find the container with id 2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8 Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.129633 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.129787 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.129839 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.129871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130037 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130066 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 
06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130080 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130134 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.130117579 +0000 UTC m=+19.926477252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130928 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.130896479 +0000 UTC m=+19.927256152 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130968 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.131009 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.131052 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.131153 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.131130095 +0000 UTC m=+19.927489768 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.134004 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.134483 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.134453565 +0000 UTC m=+19.930813228 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.433224 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.433456 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.433535 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.433515475 +0000 UTC m=+20.229875158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.461237 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:52:35.634672229 +0000 UTC Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.525832 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.526015 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.526105 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.526211 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.641578 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6dbeeeeca7a8d9ef6a945adcea1c98b4773d730ff8320fbdfcbd41b99dc5ddba"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.643444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.643518 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.643534 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fa53d1d26f46d79ba2712bdef32dc78c68266a4dcbf6484c4a4f97fa72127511"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.644962 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jpn7w" event={"ID":"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18","Type":"ContainerStarted","Data":"55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.645020 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jpn7w" 
event={"ID":"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18","Type":"ContainerStarted","Data":"9807cabbb6df0430dfff1508c321d5e7e1a61d172a63f61f42a206bcc144557d"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.646347 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.646379 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"b48afd46fe5786572eb363a2c7f5ee1a2f4a64a17faf9e11435f088851553a0f"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.648039 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c" exitCode=0 Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.648118 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.648150 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerStarted","Data":"cfd98727d4fdb5a9b3135671e6b4298f9f9e6c0badd2c3ea525a0a65894e95cd"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.650025 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.650074 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1cf4efd45fe179ee8c2d0c28f1b8b52d745fae6fec8517534c6e79d837dcd3b1"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.652457 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.654179 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5" exitCode=255 Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.654246 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.657047 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.657109 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355"} Jan 21 06:35:29 crc 
kubenswrapper[4913]: I0121 06:35:29.657123 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"f7eeb75a512ebcc9379120edfa63ce55c0ba381fb18a26f4c7e7e0c9e4af6357"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.658415 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" exitCode=0 Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.658596 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.658649 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.664017 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.673584 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.688321 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.699738 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.709536 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.721893 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.734009 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.752491 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc 
kubenswrapper[4913]: I0121 06:35:29.756968 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.757681 4913 scope.go:117] "RemoveContainer" containerID="52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.795944 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.852837 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.883846 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.921871 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.944860 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.975303 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.998927 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.013688 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.025057 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.034192 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.042282 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.042397 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.042482 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.042463584 +0000 UTC m=+21.838823257 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.054541 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.067219 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.079452 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.093726 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.112526 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.125466 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236a
a6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.137908 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc 
kubenswrapper[4913]: I0121 06:35:30.143744 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.143841 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.143808019 +0000 UTC m=+21.940167692 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.143926 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.143973 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.144004 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144110 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144128 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144132 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144140 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144147 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144153 4913 projected.go:194] Error preparing data for 
projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144148 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144191 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.144184779 +0000 UTC m=+21.940544452 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144206 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.144199929 +0000 UTC m=+21.940559602 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144229 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.14421154 +0000 UTC m=+21.940571213 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.160337 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.174494 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.358038 4913 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new 
credentials" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.447134 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.447263 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.447317 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.447304817 +0000 UTC m=+22.243664490 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.462235 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:47:44.093370318 +0000 UTC Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.526074 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.526189 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.526320 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.526411 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.532518 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.533646 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.534796 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.539561 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.558947 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.577908 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.593430 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.609726 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc 
kubenswrapper[4913]: I0121 06:35:30.635585 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.651157 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.665775 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.669884 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.671375 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31"} Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.671643 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.674262 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.674378 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.675878 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerStarted","Data":"bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243"} Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.683457 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.696546 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.709920 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.723669 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.732646 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.754836 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.766325 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.779240 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.790236 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.802922 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.821244 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.833844 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.854313 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.892825 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.919642 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.936126 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.956470 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.973953 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.994652 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.009732 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc 
kubenswrapper[4913]: I0121 06:35:31.303559 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.306287 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.306330 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.306342 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.306480 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.313726 4913 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.313974 4913 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315218 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315253 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315329 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.337151 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340859 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340922 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340941 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340956 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.357498 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360778 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360795 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360807 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.376644 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380470 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380481 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380515 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.393982 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398308 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398317 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.412386 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.413031 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415389 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415521 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.462991 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:20:00.595455093 +0000 UTC Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519226 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519288 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519329 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.526299 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.526349 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.526534 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.526723 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622888 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622915 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622929 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.683623 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.683687 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.683709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.683728 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.685829 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243" exitCode=0 Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.685908 4913 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.687961 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.700089 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.716856 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.725516 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc 
kubenswrapper[4913]: I0121 06:35:31.725563 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.725575 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.725641 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.725657 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.728330 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.749415 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.769772 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.780524 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.797653 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.810673 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.823047 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.828929 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.828984 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.828995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.829013 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.829028 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.839674 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.857326 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.872030 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 
06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.884675 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.894690 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.909017 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.923924 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.931205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.931241 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.931254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc 
kubenswrapper[4913]: I0121 06:35:31.931274 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.931288 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.939730 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc 
kubenswrapper[4913]: I0121 06:35:31.961874 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.978758 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.993759 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.004530 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.015297 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.028247 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 
06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033271 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033298 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.038337 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-cpmwx"] Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.038738 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.040667 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.040782 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.040809 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.041311 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.042341 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.063933 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.064109 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.064175 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-01-21 06:35:36.064156573 +0000 UTC m=+25.860516246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.073063 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.121820 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"n
ame\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135139 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135153 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135163 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.155534 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165053 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165146 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgdl\" (UniqueName: \"kubernetes.io/projected/440ae0d9-f160-4f49-8b38-61c65d93eea4-kube-api-access-2kgdl\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165181 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165243 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 06:35:36.165215729 +0000 UTC m=+25.961575402 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165274 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165287 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165297 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165316 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/440ae0d9-f160-4f49-8b38-61c65d93eea4-host\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165330 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:36.165319272 +0000 UTC m=+25.961678945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165345 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/440ae0d9-f160-4f49-8b38-61c65d93eea4-serviceca\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165392 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165447 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 
06:35:32.165479 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165515 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:36.165507528 +0000 UTC m=+25.961867201 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165519 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165530 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165537 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165563 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:36.165554459 +0000 UTC m=+25.961914132 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.191271 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.232690 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.237233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.237272 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.237281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc 
kubenswrapper[4913]: I0121 06:35:32.237297 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.237307 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.266952 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/440ae0d9-f160-4f49-8b38-61c65d93eea4-host\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.267005 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/440ae0d9-f160-4f49-8b38-61c65d93eea4-serviceca\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.267104 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kgdl\" (UniqueName: \"kubernetes.io/projected/440ae0d9-f160-4f49-8b38-61c65d93eea4-kube-api-access-2kgdl\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.267129 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/440ae0d9-f160-4f49-8b38-61c65d93eea4-host\") pod 
\"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.268756 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/440ae0d9-f160-4f49-8b38-61c65d93eea4-serviceca\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.276715 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.308485 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgdl\" (UniqueName: \"kubernetes.io/projected/440ae0d9-f160-4f49-8b38-61c65d93eea4-kube-api-access-2kgdl\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.335262 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.339974 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.340017 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc 
kubenswrapper[4913]: I0121 06:35:32.340029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.340049 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.340067 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.354064 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: W0121 06:35:32.366165 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod440ae0d9_f160_4f49_8b38_61c65d93eea4.slice/crio-a7f6138a3900627d7425afe80f2c1ab28b4fc6fcbe1ba5b081e408d5dbeddbe5 WatchSource:0}: Error finding container a7f6138a3900627d7425afe80f2c1ab28b4fc6fcbe1ba5b081e408d5dbeddbe5: Status 404 returned error can't find the container with id a7f6138a3900627d7425afe80f2c1ab28b4fc6fcbe1ba5b081e408d5dbeddbe5 Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.374956 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.415315 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442801 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442840 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442867 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442877 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.452304 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.463618 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:26:25.330634142 +0000 UTC Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.467957 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:32 crc 
kubenswrapper[4913]: E0121 06:35:32.468091 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.468143 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:36.468129523 +0000 UTC m=+26.264489206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.489916 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.525345 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.525432 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.525537 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.525658 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.534241 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.544949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.544984 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.544994 4913 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.545012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.545024 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.570786 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.611574 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.647064 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc 
kubenswrapper[4913]: I0121 06:35:32.647095 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.647104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.647118 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.647127 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.650056 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.693877 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.693968 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8" exitCode=0 Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.694007 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.695625 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cpmwx" event={"ID":"440ae0d9-f160-4f49-8b38-61c65d93eea4","Type":"ContainerStarted","Data":"954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.695657 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cpmwx" 
event={"ID":"440ae0d9-f160-4f49-8b38-61c65d93eea4","Type":"ContainerStarted","Data":"a7f6138a3900627d7425afe80f2c1ab28b4fc6fcbe1ba5b081e408d5dbeddbe5"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.731253 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name
\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.749987 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.750040 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.750054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.750077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.750093 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.774526 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.819988 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852208 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852631 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852683 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852700 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852722 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852738 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.897068 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.931192 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.947783 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.958153 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.958196 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.958926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.959003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.959030 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.961949 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.972623 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.994748 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.032228 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.061961 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.061990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.061999 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.062012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.062021 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.073274 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 
21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.115068 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.156002 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164870 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164900 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164908 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164921 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164930 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.196738 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.237451 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267374 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267424 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267439 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267451 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.275156 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.315095 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 
06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.352451 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369870 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369893 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369903 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.392928 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.439484 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.464194 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:52:20.854542089 +0000 UTC Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472621 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472637 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472655 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472667 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.475434 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc 
kubenswrapper[4913]: I0121 06:35:33.517082 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.526065 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.526158 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:33 crc kubenswrapper[4913]: E0121 06:35:33.526227 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:33 crc kubenswrapper[4913]: E0121 06:35:33.526344 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.563522 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-2
1T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575414 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575455 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575491 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575506 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.594825 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.632421 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.674938 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.677573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.677730 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.677827 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.677920 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.678024 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.703250 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.705795 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30" exitCode=0 Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.705874 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.717036 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.757161 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780153 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780234 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780322 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.801165 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.842531 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 
06:35:33.876871 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.886896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.886945 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.886963 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.886983 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc 
kubenswrapper[4913]: I0121 06:35:33.886996 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.914962 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.952055 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990091 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990119 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990130 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.996122 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.033671 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.075488 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092812 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092868 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.120714 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.155145 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.191471 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.194982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.195049 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.195073 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.195100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.195121 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.279200 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.295585 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.299567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.299755 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.299839 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.299958 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.300055 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.323002 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.353416 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.403164 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.403617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.403774 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.403916 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.404062 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.411286 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.439150 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.465205 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:44:55.383113781 +0000 UTC Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.474908 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.506285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.506312 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.506320 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc 
kubenswrapper[4913]: I0121 06:35:34.506332 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.506341 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.515833 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc 
kubenswrapper[4913]: I0121 06:35:34.525316 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:34 crc kubenswrapper[4913]: E0121 06:35:34.525548 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.525675 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:34 crc kubenswrapper[4913]: E0121 06:35:34.525912 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.559409 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.597468 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.609240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.609286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.609304 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc 
kubenswrapper[4913]: I0121 06:35:34.609334 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.609351 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.641579 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.679096 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.711724 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.711995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.712198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 
06:35:34.712371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.712553 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.713431 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7" exitCode=0 Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.713478 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.724905 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.758309 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.795586 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.815993 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.816073 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.816092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.816120 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.816145 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.838904 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.893915 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594
d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.917751 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919545 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.958435 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.996233 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc 
kubenswrapper[4913]: I0121 06:35:35.023308 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.023375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.023398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.023427 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.023449 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.049024 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.081232 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.122073 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.126501 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.126627 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.126660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 
06:35:35.126696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.126721 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.154088 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.196866 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229780 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229803 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.240931 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.279260 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.314477 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332183 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332242 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332297 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332323 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.362634 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435220 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435270 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435282 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435310 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.465700 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:12:54.852559078 +0000 UTC Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.525932 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.526003 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:35 crc kubenswrapper[4913]: E0121 06:35:35.526158 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:35 crc kubenswrapper[4913]: E0121 06:35:35.526279 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537404 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537428 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537463 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537489 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640341 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.721477 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851" exitCode=0 Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.721540 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743804 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743825 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.754724 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.770921 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.785892 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.798057 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc 
kubenswrapper[4913]: I0121 06:35:35.825081 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.840079 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846162 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846208 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846234 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.853790 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.876061 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.894049 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.917536 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.929963 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.939895 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948530 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948566 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948616 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948625 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.951528 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.961756 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.971381 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.989542 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050764 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050813 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050834 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050874 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.105258 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.105360 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.105405 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.105393583 +0000 UTC m=+33.901753256 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153166 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153210 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153259 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153280 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.206459 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.206692 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.206661626 +0000 UTC m=+34.003021339 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.206753 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.206823 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.206882 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207039 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207075 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207111 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207109 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207133 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:36 crc 
kubenswrapper[4913]: E0121 06:35:36.207162 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207188 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207113 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.207096768 +0000 UTC m=+34.003456471 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207248 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.207221801 +0000 UTC m=+34.003581504 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207271 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.207260282 +0000 UTC m=+34.003619985 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256878 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256958 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256982 4913 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360224 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360366 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360413 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360432 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464277 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464307 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464328 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.466471 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:03:06.83210215 +0000 UTC Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.509168 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.509388 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.509622 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.509497607 +0000 UTC m=+34.305857310 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.525702 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.525781 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.525909 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.526053 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567400 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567439 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567491 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671242 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671296 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671365 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.732627 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerStarted","Data":"f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.739462 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.740368 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.740618 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.756949 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774539 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774628 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774656 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774703 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.779733 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.781788 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.781905 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.796371 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.835136 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.855486 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.871760 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.877276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.877353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.877375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc 
kubenswrapper[4913]: I0121 06:35:36.877398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.877417 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.886804 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc 
kubenswrapper[4913]: I0121 06:35:36.908581 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.926417 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.947173 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.961114 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.976165 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981084 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981101 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981140 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.997352 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.019418 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.039141 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.062870 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.080045 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085179 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085220 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085252 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.099455 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.115383 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.141778 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7
dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.158137 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.172127 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.183957 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc 
kubenswrapper[4913]: I0121 06:35:37.188570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.188644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.188662 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.188683 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.188701 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.214114 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.234691 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
1T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.251126 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.272309 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.288933 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290701 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290747 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290763 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290775 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.309852 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.322966 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.333872 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.351200 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393355 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393367 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393380 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393389 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.467003 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:15:20.118592674 +0000 UTC Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497087 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497126 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497142 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.525920 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.526059 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:37 crc kubenswrapper[4913]: E0121 06:35:37.526247 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:37 crc kubenswrapper[4913]: E0121 06:35:37.526444 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.599980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.600051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.600074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.600105 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.600132 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702634 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702730 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702799 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.743661 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.805910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.805966 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.805983 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.806008 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.806025 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909673 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909743 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909761 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909803 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012762 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012781 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116800 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116842 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116860 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219513 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219610 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321947 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321962 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321975 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424155 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424204 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424215 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424246 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.467991 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:15:16.428549236 +0000 UTC Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.525443 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.525445 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:38 crc kubenswrapper[4913]: E0121 06:35:38.525579 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:38 crc kubenswrapper[4913]: E0121 06:35:38.525681 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527273 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527325 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527333 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630415 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630481 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733122 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733191 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733213 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733239 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733257 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.745746 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836629 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836638 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836664 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938671 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938746 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938760 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041703 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041714 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041724 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147704 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147722 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250473 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250565 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250581 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354508 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354525 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354569 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457716 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457926 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.469167 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:52:28.136243248 +0000 UTC Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.525631 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.525634 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:39 crc kubenswrapper[4913]: E0121 06:35:39.525864 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:39 crc kubenswrapper[4913]: E0121 06:35:39.525948 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560717 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560779 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560802 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560830 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560856 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663681 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663711 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765514 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765636 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765665 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765693 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765723 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868920 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971948 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075010 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075090 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075113 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075125 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177692 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177755 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177773 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177793 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177810 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280890 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384179 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384195 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.395789 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.411995 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.427938 4913 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc 
kubenswrapper[4913]: I0121 06:35:40.459541 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.470187 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:15:58.851719386 +0000 UTC Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488137 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488163 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488189 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488208 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.496672 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.515039 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.525483 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.525511 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:40 crc kubenswrapper[4913]: E0121 06:35:40.525692 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:40 crc kubenswrapper[4913]: E0121 06:35:40.525804 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.534085 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.550872 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.568366 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.586728 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591212 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591238 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591265 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.603828 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.628986 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.653359 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.680762 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.693975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694021 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694044 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694481 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.710418 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.728874 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.741796 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.752518 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.770949 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.782118 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.792783 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc 
kubenswrapper[4913]: I0121 06:35:40.797187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.797258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.797277 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.797301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.797320 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.821291 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.844719 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.860161 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.874841 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.894503 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899645 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899657 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899676 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899689 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.914039 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.928474 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.941747 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.958130 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.973619 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.986934 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001928 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001946 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001972 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001991 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104729 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207101 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207178 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207261 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.310905 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.310982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.311004 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.311035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.311057 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414442 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414513 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414551 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.470660 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:48:18.700567019 +0000 UTC Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.517064 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.517537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.517757 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.517939 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.518132 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.526276 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.526410 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.526658 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.526968 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584142 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584222 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584306 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.603877 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608191 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608228 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608246 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.624467 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629449 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629515 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629562 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629581 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.649090 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653333 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653388 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653405 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653450 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.672446 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.676940 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.677006 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.677020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.677044 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.677061 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.692839 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.692985 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695206 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695220 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695238 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695251 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797748 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797796 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797805 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797821 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797832 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.899659 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r"] Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.900411 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901484 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901516 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901537 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.903275 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.904958 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.921549 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":
\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.938544 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.953637 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.966097 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd5c2\" (UniqueName: \"kubernetes.io/projected/4aaba44f-534c-4eac-9250-e6e737a701bb-kube-api-access-dd5c2\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.966221 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc 
kubenswrapper[4913]: I0121 06:35:41.966267 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.966365 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4aaba44f-534c-4eac-9250-e6e737a701bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.974038 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.989257 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.001978 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.003951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.004015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.004033 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.004055 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.004071 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.016429 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.031478 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.046862 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.063275 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.067139 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd5c2\" (UniqueName: \"kubernetes.io/projected/4aaba44f-534c-4eac-9250-e6e737a701bb-kube-api-access-dd5c2\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.067199 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.067228 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.067271 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4aaba44f-534c-4eac-9250-e6e737a701bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.068271 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.069039 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.072558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4aaba44f-534c-4eac-9250-e6e737a701bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.075678 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.082875 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd5c2\" (UniqueName: \"kubernetes.io/projected/4aaba44f-534c-4eac-9250-e6e737a701bb-kube-api-access-dd5c2\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.100006 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107279 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107646 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107762 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.116402 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e0
94cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.130481 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.143361 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc 
kubenswrapper[4913]: I0121 06:35:42.164702 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.180182 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210065 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210086 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.220409 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: W0121 06:35:42.233901 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aaba44f_534c_4eac_9250_e6e737a701bb.slice/crio-0763cdbfa7b2e96c3b31e9dd9714ddcdec94ce8f093eaa54b2b81f271f34a610 WatchSource:0}: Error finding container 0763cdbfa7b2e96c3b31e9dd9714ddcdec94ce8f093eaa54b2b81f271f34a610: Status 404 returned error can't find the container with id 0763cdbfa7b2e96c3b31e9dd9714ddcdec94ce8f093eaa54b2b81f271f34a610 Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312517 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312527 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312554 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414838 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.471292 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:12:41.300696186 +0000 UTC Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518174 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518259 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518288 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518310 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.526091 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.526188 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:42 crc kubenswrapper[4913]: E0121 06:35:42.526315 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:42 crc kubenswrapper[4913]: E0121 06:35:42.526445 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622846 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622888 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622906 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725126 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725177 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725196 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.760877 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" event={"ID":"4aaba44f-534c-4eac-9250-e6e737a701bb","Type":"ContainerStarted","Data":"56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.760945 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" event={"ID":"4aaba44f-534c-4eac-9250-e6e737a701bb","Type":"ContainerStarted","Data":"0763cdbfa7b2e96c3b31e9dd9714ddcdec94ce8f093eaa54b2b81f271f34a610"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.763382 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/0.log" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.767300 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872" exitCode=1 Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.767352 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.768204 4913 scope.go:117] "RemoveContainer" containerID="9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.786927 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.802270 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.807374 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.823326 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827363 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827715 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827963 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.844963 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.856965 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.884033 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.919799 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930157 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930168 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930197 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.937776 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.956390 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f
99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.970481 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.982681 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.997565 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc 
kubenswrapper[4913]: I0121 06:35:43.014090 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:42Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0121 06:35:39.725089 6225 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:35:39.725459 6225 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:35:39.725527 6225 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:35:39.725626 6225 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:35:39.725686 6225 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:35:39.725721 6225 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:35:39.725727 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:35:39.725637 6225 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:35:39.725784 6225 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:35:39.725779 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:35:39.725827 6225 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:35:39.725853 6225 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:35:39.725962 6225 factory.go:656] Stopping watch factory\\\\nI0121 06:35:39.725987 6225 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.026272 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.032835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.032863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.032871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc 
kubenswrapper[4913]: I0121 06:35:43.032883 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.032891 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.037906 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.051220 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.060478 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.134992 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.135030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.135042 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.135057 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.135067 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238433 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238487 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238503 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238526 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238543 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341139 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341150 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341167 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341179 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443361 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443383 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443394 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.471909 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:14:05.548697659 +0000 UTC Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.525537 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.525668 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:43 crc kubenswrapper[4913]: E0121 06:35:43.525716 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:43 crc kubenswrapper[4913]: E0121 06:35:43.525798 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545675 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545740 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648531 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648554 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648638 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.751967 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.752371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.752397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.752427 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.752451 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.775722 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/0.log" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.781001 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.781536 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.783495 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" event={"ID":"4aaba44f-534c-4eac-9250-e6e737a701bb","Type":"ContainerStarted","Data":"3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.802404 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.819159 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.834162 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.852538 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855436 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855506 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855533 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855623 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.873440 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 
06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.888139 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.901635 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.921442 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.936911 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.957639 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.958977 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.959031 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.959047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.959081 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.959097 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.969807 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.984146 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.006345 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.023654 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.037695 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.051368 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc 
kubenswrapper[4913]: I0121 06:35:44.061707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.061777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.061798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.061822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.061839 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.079287 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:42Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0121 06:35:39.725089 6225 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:35:39.725459 6225 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 
06:35:39.725527 6225 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:35:39.725626 6225 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:35:39.725686 6225 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:35:39.725721 6225 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:35:39.725727 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:35:39.725637 6225 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:35:39.725784 6225 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:35:39.725779 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:35:39.725827 6225 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:35:39.725853 6225 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:35:39.725962 6225 factory.go:656] Stopping watch factory\\\\nI0121 06:35:39.725987 6225 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.099713 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.119253 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.130020 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.150646 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166101 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166167 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166372 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166415 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166427 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.186198 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.191871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.191960 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.192001 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.191989585 +0000 UTC m=+49.988349258 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.196707 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.218283 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.233990 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.249177 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.261963 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc 
kubenswrapper[4913]: I0121 06:35:44.268279 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.268408 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.268420 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.268434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.268444 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.279451 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:42Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0121 06:35:39.725089 6225 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:35:39.725459 6225 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 
06:35:39.725527 6225 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:35:39.725626 6225 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:35:39.725686 6225 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:35:39.725721 6225 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:35:39.725727 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:35:39.725637 6225 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:35:39.725784 6225 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:35:39.725779 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:35:39.725827 6225 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:35:39.725853 6225 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:35:39.725962 6225 factory.go:656] Stopping watch factory\\\\nI0121 06:35:39.725987 6225 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.290886 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.293193 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 
06:35:44.293314 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.293283898 +0000 UTC m=+50.089643571 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.293382 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.293428 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.293457 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293541 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293549 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293549 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293565 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293572 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293579 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293581 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293598 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.293575327 +0000 UTC m=+50.089934990 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293639 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.293629028 +0000 UTC m=+50.089988781 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293658 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-21 06:36:00.293651149 +0000 UTC m=+50.090010912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.302940 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.316562 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.328176 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.338752 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372522 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372568 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372580 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372627 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372640 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.472423 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:32:54.218342734 +0000 UTC Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476138 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476224 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476270 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.525531 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.525559 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.525765 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.525920 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579329 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579418 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579457 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.596725 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.596971 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.597134 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.597093785 +0000 UTC m=+50.393453498 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683320 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683331 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683345 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683355 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.786965 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.787025 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.787045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.787069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.787087 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.789179 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/1.log" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.790118 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/0.log" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.793459 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" exitCode=1 Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.793556 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.793673 4913 scope.go:117] "RemoveContainer" containerID="9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.794790 4913 scope.go:117] "RemoveContainer" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.795078 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.815010 4913 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.832832 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.845975 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.865553 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.888621 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890069 
4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890090 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890099 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.907538 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.923907 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc 
kubenswrapper[4913]: I0121 06:35:44.944444 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:42Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0121 06:35:39.725089 6225 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:35:39.725459 6225 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 
06:35:39.725527 6225 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:35:39.725626 6225 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:35:39.725686 6225 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:35:39.725721 6225 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:35:39.725727 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:35:39.725637 6225 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:35:39.725784 6225 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:35:39.725779 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:35:39.725827 6225 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:35:39.725853 6225 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:35:39.725962 6225 factory.go:656] Stopping watch factory\\\\nI0121 06:35:39.725987 6225 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"
/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.964358 4913 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.983959 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993057 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993071 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993090 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993104 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.000284 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.020570 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.038640 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.063116 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.075513 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.087682 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095337 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.100750 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.198921 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.198982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.199005 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.199034 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.199054 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.301990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.302051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.302069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.302092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.302110 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.405865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.405949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.405971 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.406002 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.406020 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.472912 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:50:44.066592592 +0000 UTC Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.508970 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.509060 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.509078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.509101 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.509118 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.526293 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.526361 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:45 crc kubenswrapper[4913]: E0121 06:35:45.526472 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:45 crc kubenswrapper[4913]: E0121 06:35:45.526570 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611378 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611400 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611433 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611457 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.713964 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.714037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.714059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.714085 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.714107 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.800729 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/1.log" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.805046 4913 scope.go:117] "RemoveContainer" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" Jan 21 06:35:45 crc kubenswrapper[4913]: E0121 06:35:45.805181 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816751 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816791 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816813 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.826262 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.844163 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.863503 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.881582 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.896075 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.910649 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919099 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919147 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919190 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.923230 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.940619 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.952405 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.969511 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.981096 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.004689 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022046 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022167 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022225 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022247 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.024394 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e0
94cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.036858 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.051851 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc 
kubenswrapper[4913]: I0121 06:35:46.073157 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.085309 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124379 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124390 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124404 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124414 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227738 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227834 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227888 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331058 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331170 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331211 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434007 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434085 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.473950 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:33:20.499203814 +0000 UTC Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.525691 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.525730 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:46 crc kubenswrapper[4913]: E0121 06:35:46.525896 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:46 crc kubenswrapper[4913]: E0121 06:35:46.526035 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536433 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536484 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536502 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639618 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639723 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639752 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742211 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742246 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845116 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845138 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845169 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845193 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948171 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948213 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948239 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051273 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051346 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051366 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051392 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051408 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154637 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154651 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154673 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257338 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359678 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359721 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461884 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461934 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461959 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.474372 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:45:36.890455608 +0000 UTC Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.525651 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.525728 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:47 crc kubenswrapper[4913]: E0121 06:35:47.525826 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:47 crc kubenswrapper[4913]: E0121 06:35:47.526039 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565244 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565303 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565333 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565347 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667760 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667770 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771165 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771277 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771295 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874127 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874156 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874180 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976241 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976255 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976267 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078581 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078705 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182019 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182081 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182116 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285416 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285499 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285523 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285539 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389287 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389336 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389366 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389380 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.474774 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:10:24.101070046 +0000 UTC Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491655 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491714 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491766 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491791 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.525756 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:48 crc kubenswrapper[4913]: E0121 06:35:48.525930 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.526082 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:48 crc kubenswrapper[4913]: E0121 06:35:48.526252 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594581 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594686 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594699 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697425 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697495 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697517 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697535 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800813 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800839 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800856 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.903863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.903957 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.903973 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.904004 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.904020 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.006958 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.007031 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.007049 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.007078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.007098 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110402 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110412 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110444 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214107 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214135 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214162 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214180 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317329 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317379 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317403 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420909 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420927 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420971 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.475113 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:46:58.316472197 +0000 UTC Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523498 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523536 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.525845 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.525845 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:49 crc kubenswrapper[4913]: E0121 06:35:49.525992 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:49 crc kubenswrapper[4913]: E0121 06:35:49.526106 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.626986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.627048 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.627066 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.627104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.627126 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730230 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730316 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730372 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730392 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833529 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833676 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833708 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833730 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937166 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937324 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937369 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040405 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040542 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143883 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143909 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143941 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247148 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247189 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350170 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350186 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453338 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453388 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453422 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.476105 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:43:19.387536289 +0000 UTC Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.525638 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.525676 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:50 crc kubenswrapper[4913]: E0121 06:35:50.525809 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:50 crc kubenswrapper[4913]: E0121 06:35:50.525913 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.539854 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556192 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556273 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556297 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556316 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.557789 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.579232 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.598124 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.617372 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.640551 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659026 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659088 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659109 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659161 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659870 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.687373 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.709061 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.734844 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.752255 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762305 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762337 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.784320 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.807205 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.825442 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.841177 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc 
kubenswrapper[4913]: I0121 06:35:50.865692 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.865772 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.865798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.865865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.865885 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.874102 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.891711 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968407 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968484 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968507 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968534 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968550 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.070891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.070960 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.070982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.071011 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.071036 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174338 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174416 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174438 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278295 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278319 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278351 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278375 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381623 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381721 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381779 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.477039 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:38:43.007070755 +0000 UTC Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485368 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485430 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.525917 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.525993 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.526233 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.526387 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.588932 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.589010 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.589028 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.589056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.589078 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692675 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692738 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692769 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795550 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795629 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880624 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880734 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880753 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.902250 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909493 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909514 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909563 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.930254 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936141 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936162 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.958584 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963368 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963413 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963431 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.983926 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988791 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988813 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988857 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: E0121 06:35:52.201759 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:52 crc kubenswrapper[4913]: E0121 06:35:52.202005 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204356 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204418 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204487 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.307864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.307935 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.307955 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.307982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.308001 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411390 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411518 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411569 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411623 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.477707 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:11:16.511495618 +0000 UTC Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514217 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514351 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.525725 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.525740 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:52 crc kubenswrapper[4913]: E0121 06:35:52.525962 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:52 crc kubenswrapper[4913]: E0121 06:35:52.526078 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617166 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617218 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617236 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720148 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720269 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720356 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823703 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823721 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823746 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823766 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926679 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926706 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926726 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030134 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030193 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133178 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133296 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133317 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236803 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236868 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236886 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236912 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236931 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340561 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340572 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340632 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443718 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443740 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.478323 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:55:01.071176251 +0000 UTC Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.526008 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.526133 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:53 crc kubenswrapper[4913]: E0121 06:35:53.526241 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:53 crc kubenswrapper[4913]: E0121 06:35:53.526421 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546902 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546959 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546981 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650289 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650316 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650329 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.753963 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.754029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.754046 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.754074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.754096 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857418 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857447 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857457 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960147 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960216 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960236 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960251 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063706 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063724 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063747 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063767 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166781 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166887 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166908 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.269953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.269999 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.270015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.270038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.270053 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373228 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373280 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.475978 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.476049 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.476092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.476117 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.476134 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.479202 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:57:50.436649418 +0000 UTC Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.525809 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.525994 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:54 crc kubenswrapper[4913]: E0121 06:35:54.526376 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:54 crc kubenswrapper[4913]: E0121 06:35:54.526661 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578717 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578745 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682525 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682681 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682704 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.785892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.785960 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.785970 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.785990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.786009 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.888995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.889063 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.889080 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.889105 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.889122 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992583 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992633 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095787 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095841 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095860 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199399 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199575 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199649 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.302971 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.303036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.303053 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.303079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.303102 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409769 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409927 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409951 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.480074 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:36:17.041729673 +0000 UTC Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512780 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512789 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512801 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512809 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.526068 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.526089 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:55 crc kubenswrapper[4913]: E0121 06:35:55.526236 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:55 crc kubenswrapper[4913]: E0121 06:35:55.526392 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616568 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616634 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719403 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719431 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822499 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822546 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822560 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822578 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822616 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.924859 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.924930 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.924953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.924980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.925002 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027812 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027829 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027842 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.130982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.131062 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.131086 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.131114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.131137 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234197 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234343 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234366 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337842 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337921 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440650 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440757 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440777 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.480484 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:10:53.911449329 +0000 UTC Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.526251 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.526321 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:56 crc kubenswrapper[4913]: E0121 06:35:56.526503 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:56 crc kubenswrapper[4913]: E0121 06:35:56.526646 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543311 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543367 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.646996 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.647054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.647068 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.647087 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.647103 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750378 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750436 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.852910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.853015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.853037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.853101 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.853119 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955832 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955890 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955959 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.058923 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.058986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.059004 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.059027 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.059048 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161508 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161563 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161616 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264125 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366516 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366572 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366583 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366625 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366638 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469546 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469634 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469670 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.480640 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:57:14.629388133 +0000 UTC Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.525320 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.525325 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:57 crc kubenswrapper[4913]: E0121 06:35:57.525539 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:57 crc kubenswrapper[4913]: E0121 06:35:57.525643 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.571992 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.572038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.572054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.572076 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.572094 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675359 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675385 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675404 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.778964 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.779052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.779070 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.779100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.779122 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882826 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882907 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882930 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882960 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882985 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.985991 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.986031 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.986044 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.986059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.986071 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089155 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089213 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089228 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089264 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191872 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191965 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191977 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191987 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295083 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295140 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397869 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397940 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397952 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.480794 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:35:30.104732182 +0000 UTC Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501107 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501191 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501262 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.525842 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.525853 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:58 crc kubenswrapper[4913]: E0121 06:35:58.526026 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:58 crc kubenswrapper[4913]: E0121 06:35:58.526192 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603304 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603316 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603347 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706831 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706861 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706881 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809160 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809234 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809246 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809263 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809279 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911891 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014893 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014927 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014952 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014963 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116858 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116872 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116881 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220113 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220137 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220155 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323408 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323488 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323522 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323544 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426463 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.481162 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:29:58.084062284 +0000 UTC Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.525888 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.525959 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:59 crc kubenswrapper[4913]: E0121 06:35:59.526339 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:59 crc kubenswrapper[4913]: E0121 06:35:59.526557 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.526721 4913 scope.go:117] "RemoveContainer" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529701 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529772 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529852 4913 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634897 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634919 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634946 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634966 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.738443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.739317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.739347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.739381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.739408 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844291 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844307 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844332 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844350 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947209 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947295 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051611 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155255 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155304 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155357 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155397 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259374 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259423 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259433 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259448 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259458 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.287159 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.287340 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.287427 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.287406773 +0000 UTC m=+82.083766446 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362365 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362532 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362553 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362577 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362634 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.388377 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.388669 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.388627573 +0000 UTC m=+82.184987286 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.388790 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.388894 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.388962 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389053 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389094 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389118 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389212 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389212 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389217 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.389186378 +0000 UTC m=+82.185546241 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389362 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.389331432 +0000 UTC m=+82.185691145 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389237 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389399 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389495 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.389475065 +0000 UTC m=+82.185834968 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465672 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465717 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465728 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465760 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.481880 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:01:23.151106646 +0000 UTC Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.525631 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.525804 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.525854 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.526064 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.544727 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.562420 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.567989 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.568047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.568066 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.568092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.568113 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.577205 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.593366 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.613397 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.636244 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.648805 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670854 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670876 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670903 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670927 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.676313 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.692465 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.692606 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.692659 4913 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.692642776 +0000 UTC m=+82.489002469 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.699246 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.715503 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.732031 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc 
kubenswrapper[4913]: I0121 06:36:00.759078 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.773992 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.774051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.774069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.774099 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.774118 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.775791 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.796527 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.820379 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.842183 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.867782 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.876949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.877216 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.877352 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.877496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.877714 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.980445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.980749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.980939 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.981082 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.981214 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084637 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084706 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188275 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188320 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291169 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291215 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291252 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291264 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393467 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393534 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393545 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.483059 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:43:09.190514064 +0000 UTC Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496008 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496053 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496087 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.525719 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.525724 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:01 crc kubenswrapper[4913]: E0121 06:36:01.525932 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:01 crc kubenswrapper[4913]: E0121 06:36:01.526038 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599631 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599703 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599725 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599767 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703494 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703509 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806373 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806465 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.863582 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/1.log" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.867048 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.867631 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.893521 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1
594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908388 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908420 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908432 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908446 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908456 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.911531 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.927164 4913 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.942083 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc 
kubenswrapper[4913]: I0121 06:36:01.962693 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.980724 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.996280 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010674 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010720 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010742 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.014887 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.032822 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.047715 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.060170 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.072703 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.084729 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.101050 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.112994 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113041 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113003 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113221 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113240 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.125302 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.134860 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215569 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215682 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215707 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.229955 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.230020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.230043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.230069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.230091 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.250319 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255148 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255168 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255199 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255220 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.275063 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279319 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279331 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279366 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.293780 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299098 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299166 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299176 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.320194 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324861 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324890 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324898 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324923 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.338074 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.338183 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339796 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339845 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339855 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442244 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442338 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.484020 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:07:50.210943591 +0000 UTC Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.525799 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.525812 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.526042 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.526162 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545424 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545447 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649412 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649616 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649651 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649672 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753454 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.856976 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.857047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.857071 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.857100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.857123 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.875422 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/2.log" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.876097 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/1.log" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.879938 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" exitCode=1 Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.879996 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.880045 4913 scope.go:117] "RemoveContainer" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.882813 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.883331 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.903045 4913 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc 
kubenswrapper[4913]: I0121 06:36:02.934847 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-
cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.952231 4913 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961192 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961261 4913 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961315 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.988210 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.012856 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.031810 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.054198 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065354 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065432 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065487 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065513 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.075562 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.098642 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.121726 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.151052 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.169133 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.169408 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.170251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.170306 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.170327 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.178537 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.203431 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.220131 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.242315 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.265002 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173
610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273396 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273465 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273486 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273499 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.283448 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377656 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377707 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481369 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481477 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481536 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.484674 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:36:41.774589044 +0000 UTC Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.526291 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.526404 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:03 crc kubenswrapper[4913]: E0121 06:36:03.526508 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:03 crc kubenswrapper[4913]: E0121 06:36:03.526639 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584662 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584678 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584715 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687926 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797562 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797666 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797729 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.886780 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/2.log" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.891842 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:03 crc kubenswrapper[4913]: E0121 06:36:03.892099 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.899833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.899920 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.899950 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.899982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.900002 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.914399 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-ope
rator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.932508 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.951083 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.978204 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.998196 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.003726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.003986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.004133 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.004257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.004384 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.022072 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.038465 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.070374 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.089528 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107672 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107690 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107727 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.122936 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1c
efc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.140311 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.156058 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.171099 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc 
kubenswrapper[4913]: I0121 06:36:04.190269 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.208072 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210138 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210239 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.227663 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.248359 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313752 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313770 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313793 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313811 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416561 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416666 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416714 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416734 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.484842 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:18:42.927038033 +0000 UTC Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519773 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519790 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519811 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519829 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.526198 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.526205 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:04 crc kubenswrapper[4913]: E0121 06:36:04.526387 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:04 crc kubenswrapper[4913]: E0121 06:36:04.526522 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.622957 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.623018 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.623037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.623059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.623079 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.637879 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.651661 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.656904 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a
1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.678175 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.699402 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.724725 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726623 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726757 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.746033 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.765455 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.782626 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.797940 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.812392 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z"
Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830280 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830342 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830358 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830388 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.831510 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.842757 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.861876 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7
dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.876042 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.891079 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.904941 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc 
kubenswrapper[4913]: I0121 06:36:04.933626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.933685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.933702 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.933725 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.933740 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.934522 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.950077 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.036898 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.036981 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.036997 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.037021 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.037039 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140911 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140922 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140947 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140960 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243384 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243569 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346503 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346575 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346726 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449641 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449774 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.485665 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:32:52.012375162 +0000 UTC Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.526143 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.526152 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:05 crc kubenswrapper[4913]: E0121 06:36:05.526364 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:05 crc kubenswrapper[4913]: E0121 06:36:05.526501 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553014 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553083 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553103 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553154 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656331 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656362 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.758979 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.759035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.759045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.759057 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.759084 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862632 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862692 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862725 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862739 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966260 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966280 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966328 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069410 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069517 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069535 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.172973 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.173052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.173079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.173109 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.173128 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276566 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276757 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380525 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380632 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380719 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.483974 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.484024 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.484036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.484054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.484068 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.486654 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:54:25.43018529 +0000 UTC Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.526280 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.526351 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:06 crc kubenswrapper[4913]: E0121 06:36:06.526505 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:06 crc kubenswrapper[4913]: E0121 06:36:06.526678 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587351 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587425 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587455 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691440 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.794918 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.794969 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.794988 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.795017 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.795036 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897781 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897858 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897881 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897914 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897936 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001118 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001128 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103557 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103628 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206520 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206611 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206622 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206663 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310170 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310186 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310209 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310226 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413504 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413614 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413634 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413648 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.487716 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:55:46.184774315 +0000 UTC Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516500 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516631 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516717 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.525856 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:07 crc kubenswrapper[4913]: E0121 06:36:07.526002 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.526057 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:07 crc kubenswrapper[4913]: E0121 06:36:07.526414 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620236 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620259 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620289 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620311 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723764 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723808 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723821 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723840 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723851 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827128 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930252 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930400 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.033937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.033988 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.034000 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.034018 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.034029 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.136938 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.136998 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.137015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.137040 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.137058 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240401 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240898 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.343916 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.343962 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.343975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.343992 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.344005 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447395 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447413 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.487880 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:37:43.357887901 +0000 UTC Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.525284 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:08 crc kubenswrapper[4913]: E0121 06:36:08.525403 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.525507 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:08 crc kubenswrapper[4913]: E0121 06:36:08.525857 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549813 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549850 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549872 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652584 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652636 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652659 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652679 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755786 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755804 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859419 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859473 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859499 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962455 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962534 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065620 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065633 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.167912 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.167959 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.167971 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.168009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.168032 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.271727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.272353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.272373 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.272397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.272429 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377465 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377685 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.480932 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.480980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.480996 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.481019 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.481036 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.488320 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:42:23.999565651 +0000 UTC Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.526238 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.526238 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:09 crc kubenswrapper[4913]: E0121 06:36:09.526389 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:09 crc kubenswrapper[4913]: E0121 06:36:09.526468 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584024 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584085 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584138 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688137 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688162 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688190 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688215 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791572 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791640 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791674 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791686 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894955 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894978 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998630 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998671 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998691 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102744 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102868 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102888 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207219 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207318 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207330 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.309863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.309944 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.309963 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.310009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.310036 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412180 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412229 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412245 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412278 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.488501 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:58:39.192825922 +0000 UTC Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515098 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515441 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515641 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.525779 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:10 crc kubenswrapper[4913]: E0121 06:36:10.526005 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.526144 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:10 crc kubenswrapper[4913]: E0121 06:36:10.526348 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.546118 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.566378 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.591803 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.609443 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618192 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618225 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618234 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618248 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618258 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.629661 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.649306 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.666575 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.678893 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.693666 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.706068 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.719992 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.721894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.722096 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.722316 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.722474 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.722656 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.734091 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.747393 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a
155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.771144 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
1T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.785651 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.802102 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.816714 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc 
kubenswrapper[4913]: I0121 06:36:10.825407 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.825615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.825734 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.825884 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.825998 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.835801 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.928387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.928709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.928818 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.928913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.929123 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.031875 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.032148 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.032257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.032357 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.032437 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.134672 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.134966 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.135050 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.135130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.135194 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237531 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237623 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237641 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237682 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339771 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339786 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339819 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442399 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442472 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442488 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442527 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.488835 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:49:20.228411414 +0000 UTC Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.525578 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.525637 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:11 crc kubenswrapper[4913]: E0121 06:36:11.525717 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:11 crc kubenswrapper[4913]: E0121 06:36:11.525814 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544830 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544869 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544891 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647239 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647283 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647296 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647326 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751403 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751481 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751496 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854010 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854046 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854057 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854072 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854081 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956430 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956518 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956530 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956561 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059454 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059498 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161766 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161858 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161890 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161926 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264318 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367256 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367325 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367365 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.489765 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:27:09.543284341 +0000 UTC Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491549 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491578 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491601 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491614 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491623 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.525405 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.525448 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.525615 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.525968 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586618 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586700 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.606495 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611307 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611355 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611401 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611422 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.630952 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635296 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635359 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635369 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.651328 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654610 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654678 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654686 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.672413 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676881 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676895 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676911 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676924 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.689245 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.689373 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691070 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691102 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793356 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793451 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793465 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897529 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897550 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897619 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000200 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000262 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000311 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000331 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103867 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103884 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206001 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206069 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308618 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308681 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308709 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411560 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411609 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.490254 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:43:18.611832908 +0000 UTC
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514396 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514436 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514449 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514479 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.525879 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.526007 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:36:13 crc kubenswrapper[4913]: E0121 06:36:13.526118 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 06:36:13 crc kubenswrapper[4913]: E0121 06:36:13.526271 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.616891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.616989 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.617011 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.617034 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.617050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719408 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719446 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719463 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822867 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822886 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925567 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028684 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028762 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028775 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132155 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132204 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132217 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132235 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132245 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.235143 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.235540 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.235728 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.235883 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.236014 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.338732 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.339100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.339276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.339351 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.339498 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.442229 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.442559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.442771 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.443112 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.443401 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.490782 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:36:55.421048047 +0000 UTC
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.526113 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.526212 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:36:14 crc kubenswrapper[4913]: E0121 06:36:14.526254 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7"
Jan 21 06:36:14 crc kubenswrapper[4913]: E0121 06:36:14.526427 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546881 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.649693 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.650012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.650104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.650200 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.650284 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.753928 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.753985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.754003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.754026 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.754044 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856291 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856360 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958253 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958271 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958313 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060343 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060645 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060834 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060919 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163790 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163877 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.266906 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.267188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.267269 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.267385 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.267470 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.369939 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.369973 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.369983 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.369997 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.370006 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472487 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472530 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472541 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472572 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.491672 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:08:53.693701998 +0000 UTC
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.526191 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.526201 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:36:15 crc kubenswrapper[4913]: E0121 06:36:15.526358 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 06:36:15 crc kubenswrapper[4913]: E0121 06:36:15.526461 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575105 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575121 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575164 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677797 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677807 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677826 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677837 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781208 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781270 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781291 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781307 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885180 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885197 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.940092 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/0.log" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.940168 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" containerID="9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd" exitCode=1 Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.940218 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerDied","Data":"9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.940934 4913 scope.go:117] "RemoveContainer" containerID="9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.963088 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall 
event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.979053 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988068 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988200 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988275 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988418 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.012364 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1c
efc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.026943 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.038182 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.049155 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc 
kubenswrapper[4913]: I0121 06:36:16.065502 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.079458 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091181 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091245 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091263 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091276 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.094322 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.110489 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.123091 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.136850 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.150769 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.165425 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.182549 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193918 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193962 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193971 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193984 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193992 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.195451 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.209688 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.219490 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296702 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296754 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.399756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.400097 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.400177 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.400258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.400323 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.492706 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:19:45.109034139 +0000 UTC Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503011 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503247 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503387 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.525690 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:16 crc kubenswrapper[4913]: E0121 06:36:16.525801 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.525697 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:16 crc kubenswrapper[4913]: E0121 06:36:16.526140 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.526441 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:16 crc kubenswrapper[4913]: E0121 06:36:16.526631 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605789 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605815 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605823 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605843 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708880 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708906 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708915 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811255 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811267 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811299 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.913151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.913548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.913792 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.913952 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.914094 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.945055 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/0.log" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.945101 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.959322 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.972606 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.984991 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.997037 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.009377 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016331 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.025007 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.046467 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.056886 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.068976 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.083003 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.095062 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5
eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.104076 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.112751 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc 
kubenswrapper[4913]: I0121 06:36:17.118186 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.118235 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.118245 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.118260 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.118270 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.128636 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.140157 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.156382 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.168126 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.178633 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.220490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.220528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.220539 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc 
kubenswrapper[4913]: I0121 06:36:17.220556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.220568 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.322432 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.322741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.322856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.322953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.323035 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.425450 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.425800 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.425893 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.425975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.426063 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.493005 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:22:35.551264469 +0000 UTC Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.525436 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.525495 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:17 crc kubenswrapper[4913]: E0121 06:36:17.525750 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:17 crc kubenswrapper[4913]: E0121 06:36:17.525825 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528731 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528760 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528772 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528789 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528800 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630884 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630904 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630933 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630951 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733095 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733103 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733117 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733130 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835389 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835401 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835427 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937630 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937642 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937673 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040386 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040691 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040870 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040957 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.142944 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.142985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.142995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.143012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.143022 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245488 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245554 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245568 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245577 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348795 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348838 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.450999 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.451055 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.451072 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.451096 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.451113 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.493302 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:29:33.778196578 +0000 UTC Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.526372 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.526399 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:18 crc kubenswrapper[4913]: E0121 06:36:18.526492 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:18 crc kubenswrapper[4913]: E0121 06:36:18.526618 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553657 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553682 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655613 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655702 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759278 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759320 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861610 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861619 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861632 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861641 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963248 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963306 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963318 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963329 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066432 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066612 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169246 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169314 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169324 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272041 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272120 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272131 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374613 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374638 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374655 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477070 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477080 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477101 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.493698 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:54:01.350311845 +0000 UTC Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.526170 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:19 crc kubenswrapper[4913]: E0121 06:36:19.526283 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.526293 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:19 crc kubenswrapper[4913]: E0121 06:36:19.526576 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579720 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579802 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579829 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579847 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682650 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682692 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785226 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785409 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785491 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785658 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888382 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888467 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888482 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991829 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.094511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.094865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.094956 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.095079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.095168 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198156 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198196 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198206 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198232 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.300664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.301012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.301359 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.301701 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.302005 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405115 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405136 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405146 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.494767 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:04:28.021630153 +0000 UTC Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507766 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507803 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507811 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507826 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507840 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.526247 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:20 crc kubenswrapper[4913]: E0121 06:36:20.526382 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.526529 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:20 crc kubenswrapper[4913]: E0121 06:36:20.526619 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.541832 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.566897 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.581100 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.593548 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.604875 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc 
kubenswrapper[4913]: I0121 06:36:20.609222 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.609256 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.609265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.609279 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.609287 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.623103 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.635256 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.647642 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.660608 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.674103 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.685721 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.698721 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.709365 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711475 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711548 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.717652 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.737510 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.748923 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.759552 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.768383 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.813367 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.813410 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.813421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.813438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 
06:36:20.813449 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915665 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915677 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915693 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915706 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017561 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017621 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017639 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017659 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017675 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120541 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120606 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120619 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120645 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222415 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222470 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222488 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222524 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324566 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324580 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324633 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324646 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426917 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426948 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.495526 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:15:26.266223202 +0000 UTC Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.525300 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.525339 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:21 crc kubenswrapper[4913]: E0121 06:36:21.525430 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:21 crc kubenswrapper[4913]: E0121 06:36:21.525526 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529179 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529305 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529374 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529465 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529544 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631480 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631531 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631559 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.733733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.733985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.734054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.734114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.734188 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.836844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.837068 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.837146 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.837459 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.837526 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939568 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939619 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041862 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041909 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041973 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.144780 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.145199 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.145385 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.145538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.145736 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248723 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248752 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248767 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351636 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351675 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351691 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454500 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454522 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454530 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.496060 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:21:04.792374448 +0000 UTC Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.525364 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.525401 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.525514 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.525624 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557220 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557261 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659640 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694457 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694494 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694541 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694765 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.712431 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715738 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715766 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715774 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715790 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715799 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.727729 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.732951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.733009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.733021 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.733037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.733050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.750353 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753772 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753782 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753797 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753809 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.767006 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.771566 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.771911 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.772123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.772283 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.772453 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.787738 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.787884 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789297 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789355 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789367 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.892919 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.893322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.893857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.894104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.894340 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.996977 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.997020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.997032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.997047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.997058 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100392 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100448 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100911 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100925 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.203857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.204032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.204052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.204074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.204091 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.306990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.307026 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.307036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.307053 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.307061 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409618 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409727 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.496648 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:43:23.457925833 +0000 UTC Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512807 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512885 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512907 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512935 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512956 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.526068 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.526115 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:23 crc kubenswrapper[4913]: E0121 06:36:23.526232 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:23 crc kubenswrapper[4913]: E0121 06:36:23.526358 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616090 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616189 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721345 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721395 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721412 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824624 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824759 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928139 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928219 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928246 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928278 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928304 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.030888 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.030962 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.030980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.031003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.031020 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133742 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236425 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236450 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.339882 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.340313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.340482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.340669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.340806 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443253 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443287 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443323 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.497361 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:24:35.826473691 +0000 UTC Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.525984 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.525987 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:24 crc kubenswrapper[4913]: E0121 06:36:24.526227 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:24 crc kubenswrapper[4913]: E0121 06:36:24.526109 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545840 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545941 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.647954 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.647986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.647994 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.648007 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.648018 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750302 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750331 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750351 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750359 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852903 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852916 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955760 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955779 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955800 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955817 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058467 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058515 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058576 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161217 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265783 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265840 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265862 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265878 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.368991 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.369040 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.369052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.369069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.369081 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472383 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472491 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472522 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472545 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.497939 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:08:59.95358194 +0000 UTC Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.525288 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:25 crc kubenswrapper[4913]: E0121 06:36:25.525478 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.525288 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:25 crc kubenswrapper[4913]: E0121 06:36:25.526236 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575305 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575365 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575405 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575423 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.678871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.678947 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.678966 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.678990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.679009 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781720 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781762 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.884976 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.885035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.885047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.885081 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.885092 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.987974 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.988022 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.988039 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.988061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.988077 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090562 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090638 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090671 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193691 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193708 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193758 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297134 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297189 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297207 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297229 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297247 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398811 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398836 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398855 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398864 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.498787 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:08:36.040956554 +0000 UTC Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502384 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502447 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502485 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.526075 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.526112 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:26 crc kubenswrapper[4913]: E0121 06:36:26.526304 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:26 crc kubenswrapper[4913]: E0121 06:36:26.526380 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605369 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605424 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605503 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605521 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709129 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709177 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709225 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709243 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813731 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813796 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813856 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916862 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916882 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916904 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916921 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020275 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020352 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020392 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123831 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123891 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227247 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227329 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227374 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227393 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330791 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330823 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330877 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433450 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433560 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433628 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433645 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.499995 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:00:45.517965816 +0000 UTC Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.525900 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.526009 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:27 crc kubenswrapper[4913]: E0121 06:36:27.526153 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:27 crc kubenswrapper[4913]: E0121 06:36:27.526550 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537024 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537204 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537229 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537249 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.543422 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640377 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640415 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743142 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743168 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743179 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.845958 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.846000 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.846016 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.846037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.846050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949141 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949165 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949183 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052081 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052109 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052132 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155553 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155670 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258492 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258505 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258537 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361479 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465773 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465876 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465904 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465920 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.500736 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:04:05.213587392 +0000 UTC Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.527567 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.527686 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:28 crc kubenswrapper[4913]: E0121 06:36:28.527964 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:28 crc kubenswrapper[4913]: E0121 06:36:28.528396 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568235 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568247 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568263 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568275 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671505 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671527 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671575 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774715 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774742 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774766 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877430 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877492 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877514 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877563 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980825 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980875 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980912 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980931 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083758 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083825 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083880 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083901 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186771 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186882 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289497 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289571 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289630 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289648 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393235 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393342 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393360 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497180 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497210 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497233 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.501457 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:06:11.245681834 +0000 UTC Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.526330 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:29 crc kubenswrapper[4913]: E0121 06:36:29.526538 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.526358 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:29 crc kubenswrapper[4913]: E0121 06:36:29.527196 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.527516 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601363 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601382 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601404 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601420 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.704984 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.705044 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.705065 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.705094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.705120 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807662 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807676 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807693 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807704 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910386 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012545 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012641 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012691 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012714 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115616 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115673 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115690 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115729 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218819 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321839 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321855 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424682 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424728 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424752 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.501652 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:26:10.301602571 +0000 UTC
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.525744 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:36:30 crc kubenswrapper[4913]: E0121 06:36:30.525897 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.525988 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc"
Jan 21 06:36:30 crc kubenswrapper[4913]: E0121 06:36:30.526193 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527747 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527816 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527841 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527859 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.543298 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da767
4832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.562387 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.579377 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.596357 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.612700 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.629738 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630004 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630062 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630084 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630134 4913 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.649263 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.663147 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.684210 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.704736 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.725200 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732791 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732855 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.740929 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.758075 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.776756 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.791729 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.806191 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc 
kubenswrapper[4913]: I0121 06:36:30.834050 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.835950 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.836011 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.836030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.836055 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.836076 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.849978 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.883553 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286
fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938762 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938779 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.006689 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/2.log" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.010825 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.011628 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.032769 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.042336 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.042372 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.042382 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc 
kubenswrapper[4913]: I0121 06:36:31.042398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.042410 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.045649 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.064622 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.083698 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.095632 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.111200 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.128173 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.143886 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144607 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144624 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144635 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.153947 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.170806 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.184310 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.197521 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.210300 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.224203 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248120 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248193 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248225 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248243 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.254661 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1c
efc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.273425 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.288729 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.305787 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc 
kubenswrapper[4913]: I0121 06:36:31.336740 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351099 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351170 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351214 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351233 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454071 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454117 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454136 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454175 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.502676 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:49:13.070990405 +0000 UTC Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.526201 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.526295 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:31 crc kubenswrapper[4913]: E0121 06:36:31.526344 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:31 crc kubenswrapper[4913]: E0121 06:36:31.526518 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557213 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557274 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659706 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659724 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659735 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761700 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761730 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761759 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761767 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864341 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864354 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966786 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966827 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966854 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069228 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069264 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069274 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069288 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069298 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172340 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172407 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172430 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172480 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274874 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274903 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274913 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.374264 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.374448 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.374545 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.37452428 +0000 UTC m=+146.170884043 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378110 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378157 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378184 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.475146 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475293 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.475267167 +0000 UTC m=+146.271626870 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.475455 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.475533 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475758 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475793 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475811 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475814 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475872 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.475856924 +0000 UTC m=+146.272216627 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475894 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475924 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.475957 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.476024 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.475990417 +0000 UTC m=+146.272350130 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.476077 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.476137 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.47612115 +0000 UTC m=+146.272480863 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481358 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481478 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.503621 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:56:57.545761768 +0000 UTC Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.525377 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.525520 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.525742 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.525876 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583878 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583887 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583908 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686846 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686875 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686901 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686921 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.780038 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.780211 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.780265 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.780248766 +0000 UTC m=+146.576608449 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790191 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790282 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.892914 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.892975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.892986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.893002 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.893013 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946753 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946839 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946867 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946898 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946922 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.967051 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971776 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971862 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971878 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.994972 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998846 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998855 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.015658 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.017628 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.018249 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/2.log" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020096 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020143 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020195 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.022189 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" exitCode=1 Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.022254 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.022313 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.023947 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.024394 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.041712 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.044705 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047189 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047269 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047293 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047312 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.062216 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3
bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.066388 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.066575 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068526 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068611 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068626 4913 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.084861 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.102964 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.117746 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.133388 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T0
6:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.150664 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.162036 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.171512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.171548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.171560 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc 
kubenswrapper[4913]: I0121 06:36:33.171575 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.171606 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.173180 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc 
kubenswrapper[4913]: I0121 06:36:33.193311 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"shift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 06:36:32.066654 6993 services_controller.go:452] 
Built service openshift-authentication/oauth-openshift per-node LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066664 6993 services_controller.go:453] Built service openshift-authentication/oauth-openshift template LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066674 6993 services_controller.go:454] Service openshift-authentication/oauth-openshift for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0121 06:36:32.066641 6993 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"ho
st-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.208239 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.241304 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.261914 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274308 
4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274327 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274355 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274372 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.283257 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.302911 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.321292 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.340530 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.355985 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.375414 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377014 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377067 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377086 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377137 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479343 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479403 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479425 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.504112 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:10:46.863872613 +0000 UTC Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.525374 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.525393 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.525634 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.525788 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582644 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.685947 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.686281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.686298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.686322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.686341 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789855 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789901 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789914 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789930 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789942 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893039 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893134 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893174 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893191 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996505 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996566 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.027902 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105820 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105882 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208681 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208708 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311458 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311468 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311483 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311493 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415367 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415485 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.505153 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:10:05.103975164 +0000 UTC Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519734 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519786 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519815 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519827 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.526015 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.526041 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:34 crc kubenswrapper[4913]: E0121 06:36:34.526134 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:34 crc kubenswrapper[4913]: E0121 06:36:34.526362 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622858 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622907 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622949 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725367 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725456 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725466 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827807 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827823 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827863 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930507 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930550 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930569 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033874 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033906 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033943 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137361 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137451 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137498 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137522 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240666 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240751 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240774 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240791 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343292 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343327 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343357 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.445973 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.446027 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.446061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.446087 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.446107 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.505707 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:58:42.202589035 +0000 UTC Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.525366 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.525363 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:35 crc kubenswrapper[4913]: E0121 06:36:35.525902 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:35 crc kubenswrapper[4913]: E0121 06:36:35.526005 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548805 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548829 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548861 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548883 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651549 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651622 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651665 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754655 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754668 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858532 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858654 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858840 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.962172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.962784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.963012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.963325 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.963469 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066759 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066830 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066900 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169497 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169520 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169560 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272842 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272907 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376319 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376377 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376416 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376433 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479002 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479067 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479084 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479108 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479124 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.506496 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:18:02.029288881 +0000 UTC Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.526305 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.526324 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:36 crc kubenswrapper[4913]: E0121 06:36:36.526550 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:36 crc kubenswrapper[4913]: E0121 06:36:36.526700 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582640 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582705 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582724 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582764 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686088 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686131 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686148 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.788852 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.789290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.789365 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.789447 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.789509 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.893207 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.893576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.893761 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.893899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.894050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.996970 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.997033 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.997048 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.997074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.997096 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106202 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106255 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.209716 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.210055 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.210240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.210389 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.210568 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.313933 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.313995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.314012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.314036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.314053 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416966 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.507486 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 04:59:26.659678856 +0000 UTC Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520526 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520625 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520643 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520683 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.525822 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:37 crc kubenswrapper[4913]: E0121 06:36:37.526036 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.526172 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:37 crc kubenswrapper[4913]: E0121 06:36:37.526411 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.623614 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.623734 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.623760 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.624243 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.624529 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728545 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728627 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832095 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832111 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832135 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832152 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935118 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935197 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935215 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935256 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037915 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037933 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037957 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037976 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.140985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.141038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.141056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.141079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.141099 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244088 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244158 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244181 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244214 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244237 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347050 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347107 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347129 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347156 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347176 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450124 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450219 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450244 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450264 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.508186 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:59:31.168726376 +0000 UTC Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.526170 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.526236 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:38 crc kubenswrapper[4913]: E0121 06:36:38.526462 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:38 crc kubenswrapper[4913]: E0121 06:36:38.526627 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552473 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552541 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552565 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552629 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552656 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.654920 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.654980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.654996 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.655019 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.655038 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758158 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758210 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758227 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758267 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.861966 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.862042 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.862065 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.862093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.862116 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965531 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965632 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068181 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068239 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170160 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170214 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272458 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272554 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272566 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.375975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.376031 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.376047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.376069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.376087 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479053 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479121 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479199 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.508686 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:30:29.612798707 +0000 UTC Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.526114 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.526228 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:39 crc kubenswrapper[4913]: E0121 06:36:39.526316 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:39 crc kubenswrapper[4913]: E0121 06:36:39.526725 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582305 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582342 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582358 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.685890 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.685964 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.685987 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.686013 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.686034 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788386 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788450 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788501 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788518 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890804 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890947 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994311 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994327 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994364 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097577 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097668 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200634 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200664 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303887 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303902 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303921 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303933 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407499 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407577 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407678 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.509262 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 02:59:54.75291436 +0000 UTC Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511060 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511076 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511116 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.525726 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.525769 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:40 crc kubenswrapper[4913]: E0121 06:36:40.525894 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:40 crc kubenswrapper[4913]: E0121 06:36:40.525997 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.548369 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.570050 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.585215 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621383 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621468 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621491 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621543 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.654409 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.677445 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.692538 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.705072 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc 
kubenswrapper[4913]: I0121 06:36:40.723661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.723716 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.723733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.723755 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.723771 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.727072 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"shift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 06:36:32.066654 6993 services_controller.go:452] 
Built service openshift-authentication/oauth-openshift per-node LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066664 6993 services_controller.go:453] Built service openshift-authentication/oauth-openshift template LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066674 6993 services_controller.go:454] Service openshift-authentication/oauth-openshift for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0121 06:36:32.066641 6993 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"ho
st-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.744330 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.755314 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.767812 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.783136 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.796967 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.806621 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.816908 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825581 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825612 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825625 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.832888 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.847071 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.858490 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.872035 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928563 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928736 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032785 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032827 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.139892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.139977 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.140001 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.140029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.140050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.242956 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.243020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.243041 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.243069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.243093 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345751 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345873 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345897 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448267 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448346 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.510128 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:35:51.118793718 +0000 UTC Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.525679 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:41 crc kubenswrapper[4913]: E0121 06:36:41.525866 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.525692 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:41 crc kubenswrapper[4913]: E0121 06:36:41.525997 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552214 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552266 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552330 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656144 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656221 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656287 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759356 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759401 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759420 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880492 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880684 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984427 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984526 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984545 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984726 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087919 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087934 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087966 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191718 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191759 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191777 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294922 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294932 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294952 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294963 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397248 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397264 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397289 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397310 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500320 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500475 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.510548 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:36:35.989284474 +0000 UTC Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.526070 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.526119 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:42 crc kubenswrapper[4913]: E0121 06:36:42.526237 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:42 crc kubenswrapper[4913]: E0121 06:36:42.526352 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603553 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603650 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603674 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603704 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603727 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706831 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706879 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810704 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810778 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810795 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810846 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913356 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913480 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913501 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017058 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121329 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121406 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121442 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.224869 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.224948 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.224970 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.224998 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.225020 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.327997 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.328063 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.328080 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.328109 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.328128 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359380 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359439 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.381200 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386001 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386103 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386120 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.405921 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411212 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411291 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411373 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.431664 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436708 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436789 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436818 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436850 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436877 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.457451 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467017 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467246 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467327 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.485319 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.485556 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.487936 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.487991 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.488009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.488033 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.488049 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.511228 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:44:50.722343787 +0000 UTC Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.525725 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.525766 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.525885 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.526041 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590620 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590721 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590748 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590768 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.693482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.693830 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.693988 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.694145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.694288 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797419 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797499 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797545 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900182 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900243 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900283 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900302 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003629 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003648 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106238 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106324 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106378 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209388 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209456 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209481 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209498 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312854 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312929 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312971 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456451 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456493 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.512072 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:58:08.512731489 +0000 UTC Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.526447 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.526453 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:44 crc kubenswrapper[4913]: E0121 06:36:44.526719 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:44 crc kubenswrapper[4913]: E0121 06:36:44.526859 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559815 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559873 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559915 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559933 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.663357 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.663754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.663896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.664140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.664354 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768505 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768977 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872041 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872088 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872142 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974827 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974901 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078028 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078084 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078127 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078144 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181574 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284271 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284344 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284369 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284412 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387578 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387633 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387652 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489467 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489539 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489557 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489581 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489636 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.512909 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:10:31.659767836 +0000 UTC Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.525492 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.525554 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:45 crc kubenswrapper[4913]: E0121 06:36:45.525699 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:45 crc kubenswrapper[4913]: E0121 06:36:45.525777 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592008 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592099 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592122 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592140 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.694996 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.695271 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.695363 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.695451 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.695535 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798758 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798870 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798911 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901048 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901089 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901097 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901110 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901118 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003464 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003557 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003576 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106812 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106944 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210105 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210138 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313103 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313122 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313146 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313166 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415845 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.513328 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:21:01.932002805 +0000 UTC Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.520961 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.521027 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.521045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.521073 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.521091 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.525326 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.525422 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:46 crc kubenswrapper[4913]: E0121 06:36:46.525518 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:46 crc kubenswrapper[4913]: E0121 06:36:46.525707 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624678 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624757 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624787 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624805 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728764 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728841 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728884 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831382 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831487 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831508 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934180 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934272 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036792 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036801 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036824 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139506 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139596 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139606 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241912 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241929 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241968 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.343879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.343953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.343975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.344002 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.344026 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446758 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446793 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.513938 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:52:20.229302984 +0000 UTC Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.526407 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.526424 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:47 crc kubenswrapper[4913]: E0121 06:36:47.526935 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:47 crc kubenswrapper[4913]: E0121 06:36:47.527151 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.527298 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:36:47 crc kubenswrapper[4913]: E0121 06:36:47.527543 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.541902 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549345 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549474 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549777 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.556960 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.573176 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a75
0af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\"
,\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.589660 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.604164 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc 
kubenswrapper[4913]: I0121 06:36:47.623368 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"shift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 06:36:32.066654 6993 services_controller.go:452] Built service openshift-authentication/oauth-openshift per-node LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066664 6993 services_controller.go:453] Built service openshift-authentication/oauth-openshift template LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066674 6993 services_controller.go:454] Service openshift-authentication/oauth-openshift for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0121 06:36:32.066641 6993 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.639497 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652019 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652346 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652500 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652885 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.671514 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1c
efc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.689989 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.711246 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.733777 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.750718 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755771 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755815 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.763463 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.775986 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.791102 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.803677 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9
b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.827319 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.850893 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859302 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859434 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.875163 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962637 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 
06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962769 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.065793 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.066207 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.066337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.066485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.066669 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169627 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169702 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273212 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273277 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376129 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376181 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376416 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.479884 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.480302 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.480466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.480659 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.480849 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.514516 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:24:10.880352614 +0000 UTC Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.526053 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.526112 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:48 crc kubenswrapper[4913]: E0121 06:36:48.526233 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:48 crc kubenswrapper[4913]: E0121 06:36:48.526882 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.583875 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.583927 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.583951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.583998 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.584023 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.686787 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.687056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.687203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.687326 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.687458 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.791047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.791319 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.791483 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.791857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.792034 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896128 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896146 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896192 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999523 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999565 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999583 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103279 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103303 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103322 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.206824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.207307 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.207327 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.207353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.207370 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.310882 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.311711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.311754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.311787 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.311809 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414869 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414891 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.514992 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:26:48.979447129 +0000 UTC Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518539 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518580 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518631 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.525895 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.525970 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:49 crc kubenswrapper[4913]: E0121 06:36:49.526067 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:49 crc kubenswrapper[4913]: E0121 06:36:49.526198 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621424 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621507 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621527 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621630 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725252 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725332 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725358 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725376 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829120 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829202 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829221 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829272 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.932211 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.932583 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.932818 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.933039 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.933259 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037334 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037359 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037377 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140450 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140525 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140679 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140732 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244323 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244404 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244432 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244493 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348275 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348293 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348334 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451489 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451516 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.516186 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:31:40.621985324 +0000 UTC Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.525801 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:50 crc kubenswrapper[4913]: E0121 06:36:50.526536 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.527017 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:50 crc kubenswrapper[4913]: E0121 06:36:50.527167 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.544573 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555475 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555498 4913 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.563752 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.584418 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659435 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659518 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659561 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.697811 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.697781267 podStartE2EDuration="1m21.697781267s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.677304111 +0000 UTC m=+100.473663824" watchObservedRunningTime="2026-01-21 06:36:50.697781267 +0000 UTC m=+100.494140970" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.715389 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jpn7w" podStartSLOduration=82.715362276 podStartE2EDuration="1m22.715362276s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.715354626 +0000 UTC m=+100.511714339" watchObservedRunningTime="2026-01-21 06:36:50.715362276 +0000 UTC m=+100.511721979" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763873 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763923 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763936 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763954 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763968 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.766938 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" podStartSLOduration=82.766910822 podStartE2EDuration="1m22.766910822s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.747417372 +0000 UTC m=+100.543777075" watchObservedRunningTime="2026-01-21 06:36:50.766910822 +0000 UTC m=+100.563270525" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.803857 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gn6lz" podStartSLOduration=82.803822757 podStartE2EDuration="1m22.803822757s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.786093243 +0000 UTC m=+100.582452916" watchObservedRunningTime="2026-01-21 06:36:50.803822757 +0000 UTC m=+100.600182440" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.804474 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cpmwx" podStartSLOduration=81.804464993 podStartE2EDuration="1m21.804464993s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.80205634 +0000 
UTC m=+100.598416073" watchObservedRunningTime="2026-01-21 06:36:50.804464993 +0000 UTC m=+100.600824666" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.866371 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.866351926 podStartE2EDuration="1m22.866351926s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.866217502 +0000 UTC m=+100.662577185" watchObservedRunningTime="2026-01-21 06:36:50.866351926 +0000 UTC m=+100.662711619" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.866538 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.86653183 podStartE2EDuration="1m18.86653183s" podCreationTimestamp="2026-01-21 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.837407013 +0000 UTC m=+100.633766726" watchObservedRunningTime="2026-01-21 06:36:50.86653183 +0000 UTC m=+100.662891513" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867783 
4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.884049 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podStartSLOduration=82.884019546 podStartE2EDuration="1m22.884019546s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.883156054 +0000 UTC m=+100.679515727" watchObservedRunningTime="2026-01-21 06:36:50.884019546 +0000 UTC m=+100.680379239" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975160 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975174 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975213 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079640 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079708 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079725 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079767 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182565 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182690 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182705 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285950 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285969 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389941 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389959 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493716 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493799 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.517071 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:47:59.220272372 +0000 UTC Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.526502 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:51 crc kubenswrapper[4913]: E0121 06:36:51.526802 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.527207 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:51 crc kubenswrapper[4913]: E0121 06:36:51.527370 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596701 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596785 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596853 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700179 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700287 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700318 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700341 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803163 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803288 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803333 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906633 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906758 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010087 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010157 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010182 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010217 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010237 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113226 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113283 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113367 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216048 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216112 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216153 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216171 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318668 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421604 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.517619 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:13:22.658497583 +0000 UTC Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524513 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524550 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524561 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524585 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.525726 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.525808 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:52 crc kubenswrapper[4913]: E0121 06:36:52.525901 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:52 crc kubenswrapper[4913]: E0121 06:36:52.526097 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.627952 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.628010 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.628022 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.628042 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.628054 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730893 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730930 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730954 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833244 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833292 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833308 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833344 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936627 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936692 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936755 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038883 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038929 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038941 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141655 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141672 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141716 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244189 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244215 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244247 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244270 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.346901 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.346961 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.346980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.347003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.347023 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449744 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449880 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449903 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.517895 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:48:10.574419777 +0000 UTC Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.526174 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.526200 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:53 crc kubenswrapper[4913]: E0121 06:36:53.526313 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:53 crc kubenswrapper[4913]: E0121 06:36:53.526443 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550616 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550656 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550676 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550685 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189562 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189580 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:54Z","lastTransitionTime":"2026-01-21T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.218463 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" podStartSLOduration=85.218433346 podStartE2EDuration="1m25.218433346s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.939701303 +0000 UTC m=+100.736060986" watchObservedRunningTime="2026-01-21 06:36:54.218433346 +0000 UTC m=+104.014793059" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.219662 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz"] Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.220190 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.222976 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.223885 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.223956 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.224696 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.251826 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=27.251789677 podStartE2EDuration="27.251789677s" podCreationTimestamp="2026-01-21 06:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:54.250892752 +0000 UTC m=+104.047252435" watchObservedRunningTime="2026-01-21 06:36:54.251789677 +0000 UTC m=+104.048149350" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.265560 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.265553703 podStartE2EDuration="50.265553703s" podCreationTimestamp="2026-01-21 06:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:54.265365308 +0000 UTC m=+104.061724991" watchObservedRunningTime="2026-01-21 06:36:54.265553703 +0000 UTC m=+104.061913376" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324717 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/297f7c0e-6df1-49e0-821e-20cb040cba1e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324760 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/297f7c0e-6df1-49e0-821e-20cb040cba1e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324810 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324842 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f7c0e-6df1-49e0-821e-20cb040cba1e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324892 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.425860 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/297f7c0e-6df1-49e0-821e-20cb040cba1e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.425948 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/297f7c0e-6df1-49e0-821e-20cb040cba1e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" 
(UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426019 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426054 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f7c0e-6df1-49e0-821e-20cb040cba1e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426119 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426219 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426238 4913 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.427405 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/297f7c0e-6df1-49e0-821e-20cb040cba1e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.445163 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/297f7c0e-6df1-49e0-821e-20cb040cba1e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.464468 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f7c0e-6df1-49e0-821e-20cb040cba1e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.518879 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:34:33.535404226 +0000 UTC Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.518936 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 
06:36:54.525525 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.525529 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:54 crc kubenswrapper[4913]: E0121 06:36:54.525803 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:54 crc kubenswrapper[4913]: E0121 06:36:54.525895 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.528972 4913 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.537555 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: W0121 06:36:54.560099 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod297f7c0e_6df1_49e0_821e_20cb040cba1e.slice/crio-1ef96ef3caada0668e1a07224fb82b838f5e0b10fe2481662fa4f72a9ef8b1c6 WatchSource:0}: Error finding container 1ef96ef3caada0668e1a07224fb82b838f5e0b10fe2481662fa4f72a9ef8b1c6: Status 404 returned error can't find the container with id 1ef96ef3caada0668e1a07224fb82b838f5e0b10fe2481662fa4f72a9ef8b1c6 Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.111100 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" event={"ID":"297f7c0e-6df1-49e0-821e-20cb040cba1e","Type":"ContainerStarted","Data":"f6c5682ff2f3eedfd0ec6980dfa329f8983bdd65b6a9a0d4dc7edef54f1a2292"} Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.111190 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" event={"ID":"297f7c0e-6df1-49e0-821e-20cb040cba1e","Type":"ContainerStarted","Data":"1ef96ef3caada0668e1a07224fb82b838f5e0b10fe2481662fa4f72a9ef8b1c6"} Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.133278 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" podStartSLOduration=86.133247818 podStartE2EDuration="1m26.133247818s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:55.132696394 +0000 UTC m=+104.929056137" watchObservedRunningTime="2026-01-21 06:36:55.133247818 +0000 UTC m=+104.929607501" Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.526051 4913 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.526057 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:55 crc kubenswrapper[4913]: E0121 06:36:55.526244 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:55 crc kubenswrapper[4913]: E0121 06:36:55.526346 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:56 crc kubenswrapper[4913]: I0121 06:36:56.525964 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:56 crc kubenswrapper[4913]: I0121 06:36:56.526106 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:56 crc kubenswrapper[4913]: E0121 06:36:56.526251 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:56 crc kubenswrapper[4913]: E0121 06:36:56.526544 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:57 crc kubenswrapper[4913]: I0121 06:36:57.526296 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:57 crc kubenswrapper[4913]: E0121 06:36:57.526443 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:57 crc kubenswrapper[4913]: I0121 06:36:57.526306 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:57 crc kubenswrapper[4913]: E0121 06:36:57.526526 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:58 crc kubenswrapper[4913]: I0121 06:36:58.526078 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:58 crc kubenswrapper[4913]: I0121 06:36:58.526120 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:58 crc kubenswrapper[4913]: E0121 06:36:58.526253 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:58 crc kubenswrapper[4913]: E0121 06:36:58.526335 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:59 crc kubenswrapper[4913]: I0121 06:36:59.525651 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:59 crc kubenswrapper[4913]: I0121 06:36:59.525698 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:59 crc kubenswrapper[4913]: E0121 06:36:59.525834 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:59 crc kubenswrapper[4913]: E0121 06:36:59.526040 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:00 crc kubenswrapper[4913]: I0121 06:37:00.525864 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:00 crc kubenswrapper[4913]: I0121 06:37:00.525940 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:00 crc kubenswrapper[4913]: E0121 06:37:00.527091 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:00 crc kubenswrapper[4913]: E0121 06:37:00.527373 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:01 crc kubenswrapper[4913]: I0121 06:37:01.528945 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:37:01 crc kubenswrapper[4913]: E0121 06:37:01.529398 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:37:01 crc kubenswrapper[4913]: I0121 06:37:01.529761 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:01 crc kubenswrapper[4913]: E0121 06:37:01.529863 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:01 crc kubenswrapper[4913]: I0121 06:37:01.530060 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:01 crc kubenswrapper[4913]: E0121 06:37:01.530139 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.140547 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.141504 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/0.log" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.141706 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" containerID="f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6" exitCode=1 Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.141760 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerDied","Data":"f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6"} Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.141937 4913 scope.go:117] "RemoveContainer" containerID="9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.142431 4913 scope.go:117] "RemoveContainer" containerID="f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6" Jan 21 06:37:02 crc kubenswrapper[4913]: E0121 06:37:02.142807 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gn6lz_openshift-multus(b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf)\"" pod="openshift-multus/multus-gn6lz" podUID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.526189 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:02 crc kubenswrapper[4913]: E0121 06:37:02.526414 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.526816 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:02 crc kubenswrapper[4913]: E0121 06:37:02.527137 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:03 crc kubenswrapper[4913]: I0121 06:37:03.148104 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:37:03 crc kubenswrapper[4913]: I0121 06:37:03.525790 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:03 crc kubenswrapper[4913]: I0121 06:37:03.525797 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:03 crc kubenswrapper[4913]: E0121 06:37:03.525945 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:03 crc kubenswrapper[4913]: E0121 06:37:03.526061 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:04 crc kubenswrapper[4913]: I0121 06:37:04.526126 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:04 crc kubenswrapper[4913]: I0121 06:37:04.526180 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:04 crc kubenswrapper[4913]: E0121 06:37:04.526299 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:04 crc kubenswrapper[4913]: E0121 06:37:04.526768 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:05 crc kubenswrapper[4913]: I0121 06:37:05.526135 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:05 crc kubenswrapper[4913]: I0121 06:37:05.526156 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:05 crc kubenswrapper[4913]: E0121 06:37:05.527479 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:05 crc kubenswrapper[4913]: E0121 06:37:05.527630 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:06 crc kubenswrapper[4913]: I0121 06:37:06.525935 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:06 crc kubenswrapper[4913]: I0121 06:37:06.526003 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:06 crc kubenswrapper[4913]: E0121 06:37:06.526129 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:06 crc kubenswrapper[4913]: E0121 06:37:06.526252 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:07 crc kubenswrapper[4913]: I0121 06:37:07.525898 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:07 crc kubenswrapper[4913]: E0121 06:37:07.526090 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:07 crc kubenswrapper[4913]: I0121 06:37:07.525898 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:07 crc kubenswrapper[4913]: E0121 06:37:07.526427 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:08 crc kubenswrapper[4913]: I0121 06:37:08.526822 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:08 crc kubenswrapper[4913]: E0121 06:37:08.526998 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:08 crc kubenswrapper[4913]: I0121 06:37:08.527277 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:08 crc kubenswrapper[4913]: E0121 06:37:08.527378 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:09 crc kubenswrapper[4913]: I0121 06:37:09.525777 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:09 crc kubenswrapper[4913]: I0121 06:37:09.525837 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:09 crc kubenswrapper[4913]: E0121 06:37:09.525971 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:09 crc kubenswrapper[4913]: E0121 06:37:09.526237 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:10 crc kubenswrapper[4913]: E0121 06:37:10.510801 4913 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 06:37:10 crc kubenswrapper[4913]: I0121 06:37:10.525628 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:10 crc kubenswrapper[4913]: I0121 06:37:10.525740 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:10 crc kubenswrapper[4913]: E0121 06:37:10.528614 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:10 crc kubenswrapper[4913]: E0121 06:37:10.528581 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:10 crc kubenswrapper[4913]: E0121 06:37:10.631641 4913 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 06:37:11 crc kubenswrapper[4913]: I0121 06:37:11.526298 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:11 crc kubenswrapper[4913]: I0121 06:37:11.526394 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:11 crc kubenswrapper[4913]: E0121 06:37:11.526484 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:11 crc kubenswrapper[4913]: E0121 06:37:11.526649 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:12 crc kubenswrapper[4913]: I0121 06:37:12.525760 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:12 crc kubenswrapper[4913]: I0121 06:37:12.525776 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:12 crc kubenswrapper[4913]: E0121 06:37:12.526007 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:12 crc kubenswrapper[4913]: E0121 06:37:12.526504 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:12 crc kubenswrapper[4913]: I0121 06:37:12.526946 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.187151 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.189496 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.190159 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:37:13 crc kubenswrapper[4913]: 
I0121 06:37:13.228574 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podStartSLOduration=105.228554312 podStartE2EDuration="1m45.228554312s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:13.228300425 +0000 UTC m=+123.024660168" watchObservedRunningTime="2026-01-21 06:37:13.228554312 +0000 UTC m=+123.024913985" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.520346 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wfcsc"] Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.520495 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:13 crc kubenswrapper[4913]: E0121 06:37:13.520656 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.526428 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:13 crc kubenswrapper[4913]: E0121 06:37:13.526683 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.527139 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:13 crc kubenswrapper[4913]: E0121 06:37:13.527274 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:14 crc kubenswrapper[4913]: I0121 06:37:14.526854 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:14 crc kubenswrapper[4913]: E0121 06:37:14.527016 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:15 crc kubenswrapper[4913]: I0121 06:37:15.526126 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:15 crc kubenswrapper[4913]: I0121 06:37:15.526221 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:15 crc kubenswrapper[4913]: I0121 06:37:15.526642 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:15 crc kubenswrapper[4913]: E0121 06:37:15.526675 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:15 crc kubenswrapper[4913]: E0121 06:37:15.526785 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:15 crc kubenswrapper[4913]: I0121 06:37:15.526833 4913 scope.go:117] "RemoveContainer" containerID="f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6" Jan 21 06:37:15 crc kubenswrapper[4913]: E0121 06:37:15.526932 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:15 crc kubenswrapper[4913]: E0121 06:37:15.633174 4913 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 06:37:16 crc kubenswrapper[4913]: I0121 06:37:16.209198 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:37:16 crc kubenswrapper[4913]: I0121 06:37:16.209252 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70"} Jan 21 06:37:16 crc kubenswrapper[4913]: I0121 06:37:16.526506 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:16 crc kubenswrapper[4913]: E0121 06:37:16.526735 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:17 crc kubenswrapper[4913]: I0121 06:37:17.525974 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:17 crc kubenswrapper[4913]: I0121 06:37:17.526013 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:17 crc kubenswrapper[4913]: E0121 06:37:17.526182 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:17 crc kubenswrapper[4913]: I0121 06:37:17.526298 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:17 crc kubenswrapper[4913]: E0121 06:37:17.526435 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:17 crc kubenswrapper[4913]: E0121 06:37:17.526520 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:18 crc kubenswrapper[4913]: I0121 06:37:18.526386 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:18 crc kubenswrapper[4913]: E0121 06:37:18.526635 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:19 crc kubenswrapper[4913]: I0121 06:37:19.526429 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:19 crc kubenswrapper[4913]: I0121 06:37:19.526501 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:19 crc kubenswrapper[4913]: E0121 06:37:19.526720 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:19 crc kubenswrapper[4913]: E0121 06:37:19.527163 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:19 crc kubenswrapper[4913]: I0121 06:37:19.532663 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:19 crc kubenswrapper[4913]: E0121 06:37:19.532878 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:20 crc kubenswrapper[4913]: I0121 06:37:20.527020 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:20 crc kubenswrapper[4913]: E0121 06:37:20.528307 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.525721 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.525919 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.526153 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.529437 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.530805 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.530923 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.532508 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 06:37:22 crc kubenswrapper[4913]: I0121 06:37:22.525942 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:22 crc kubenswrapper[4913]: I0121 06:37:22.529148 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 06:37:22 crc kubenswrapper[4913]: I0121 06:37:22.529377 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.073533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.126652 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8kvjs"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.127358 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.130490 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.131629 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.134118 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.134126 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.134250 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.134568 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.135484 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.143903 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.166802 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.167429 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j966n"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.167872 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.168333 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.168820 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.169232 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.172691 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p4428"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.173585 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.174337 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.174825 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177045 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177099 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-config\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177126 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/19fc8173-94d9-419d-9031-b0664a3f01e4-serving-cert\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177159 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177183 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177203 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-service-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177247 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177267 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177285 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtft\" (UniqueName: \"kubernetes.io/projected/19fc8173-94d9-419d-9031-b0664a3f01e4-kube-api-access-7gtft\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177306 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.179703 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.180630 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.181617 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.182556 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.182723 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.182753 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.182980 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.183248 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.185165 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.185389 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.188781 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.189351 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"] Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.194547 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.194750 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195201 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195349 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195548 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195687 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195861 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195918 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195579 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195548 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196257 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196370 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196384 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196470 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196494 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196262 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196783 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196908 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197156 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-k6jdd"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197259 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197317 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197471 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197487 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197561 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197576 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197756 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197757 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197833 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197955 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197973 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198048 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198155 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198246 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198257 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198617 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198773 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198811 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198923 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199135 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 
06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198439 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198476 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199280 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199293 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199439 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199482 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199550 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199726 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199894 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.200121 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.200381 4913 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.201654 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.201669 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.213366 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f95sb"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.214297 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.215752 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.216521 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.222930 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.226461 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.227286 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.229027 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k855s"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.241051 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.241338 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.241475 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.242793 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.243391 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.242799 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.243759 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.243973 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6plkm"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.243174 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.244741 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.244074 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.244147 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.244518 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.250420 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.250461 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.253471 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.254879 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.266233 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.266781 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.266814 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.267688 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.268218 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.268297 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.268600 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.269688 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.270115 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.270223 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.271617 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274093 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274122 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274182 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274366 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274484 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274602 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274697 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274733 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274842 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274875 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274948 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275012 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275033 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274708 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274842 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275183 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275273 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275373 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275466 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"] Jan 
21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275548 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.276085 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.276550 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.279862 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280091 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-serving-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280127 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkl57\" (UniqueName: \"kubernetes.io/projected/8a371e85-6173-4802-976d-7ee68bc9afdc-kube-api-access-qkl57\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280151 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280177 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280191 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280217 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msv6f\" (UniqueName: \"kubernetes.io/projected/0ee14186-f787-47f1-8537-8cb2210ac28c-kube-api-access-msv6f\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280242 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a371e85-6173-4802-976d-7ee68bc9afdc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280264 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp4zs\" (UniqueName: 
\"kubernetes.io/projected/3dc93a0c-f8e0-4c76-a032-6d3e34878168-kube-api-access-dp4zs\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280287 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6m5w\" (UniqueName: \"kubernetes.io/projected/6b1d8220-775c-47a7-a772-00eacc2f957c-kube-api-access-j6m5w\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280309 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280332 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280331 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280454 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280479 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzm42\" (UniqueName: \"kubernetes.io/projected/70da4912-d52e-41a4-bf05-91f3f377d243-kube-api-access-zzm42\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280501 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c207fbab-618a-4c01-8450-cb7ffad0f50d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280526 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a371e85-6173-4802-976d-7ee68bc9afdc-serving-cert\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280547 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280569 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280615 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280652 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-encryption-config\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280673 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-console-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 
06:37:25.280697 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280722 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnbk\" (UniqueName: \"kubernetes.io/projected/208b512b-e1b8-4df9-9ec2-0f30bea24a20-kube-api-access-xjnbk\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280758 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-config\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280779 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-client\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280799 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-oauth-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " 
pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280820 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5v4\" (UniqueName: \"kubernetes.io/projected/c207fbab-618a-4c01-8450-cb7ffad0f50d-kube-api-access-bd5v4\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280840 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280863 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280873 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280900 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " 
pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280922 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280942 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280964 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9zw\" (UniqueName: \"kubernetes.io/projected/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-kube-api-access-5w9zw\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280985 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.289655 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.291947 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.292194 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.294023 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.294103 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.296915 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.297871 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.298226 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.298524 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299465 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.281004 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299839 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-client\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299877 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299914 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zwkm\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-kube-api-access-8zwkm\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299936 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4e8188-571a-4f41-8665-0565bf75f0d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300021 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6j8z\" (UniqueName: \"kubernetes.io/projected/08ac51dd-419d-4632-8a49-1972be301121-kube-api-access-f6j8z\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300045 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-serving-cert\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300077 
4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25cm\" (UniqueName: \"kubernetes.io/projected/57e1cc03-984e-4486-8393-f80bc1aa94af-kube-api-access-x25cm\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300097 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-config\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300114 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-serving-cert\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300130 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmvf\" (UniqueName: \"kubernetes.io/projected/026a670d-684f-4eb6-bda0-bd60294d3b95-kube-api-access-8bmvf\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300149 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300168 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-serving-cert\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300191 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300206 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-service-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300224 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-client\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300241 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300282 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-audit\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300306 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-oauth-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300326 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-auth-proxy-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300356 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300454 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300474 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gtft\" (UniqueName: \"kubernetes.io/projected/19fc8173-94d9-419d-9031-b0664a3f01e4-kube-api-access-7gtft\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300492 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-policies\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300510 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-service-ca\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300529 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-trusted-ca\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " 
pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300548 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/465393d8-5293-482f-8f3b-91578b3ba57b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300565 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1d8220-775c-47a7-a772-00eacc2f957c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300585 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1d8220-775c-47a7-a772-00eacc2f957c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300639 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300661 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300695 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-images\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300725 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300745 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4e8188-571a-4f41-8665-0565bf75f0d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300760 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/48edf52b-d54b-4116-95d0-f8051704a4e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300804 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300828 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-config\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300863 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxdk\" (UniqueName: \"kubernetes.io/projected/465393d8-5293-482f-8f3b-91578b3ba57b-kube-api-access-smxdk\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300906 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300922 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-node-pullsecrets\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300945 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc8173-94d9-419d-9031-b0664a3f01e4-serving-cert\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300966 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-config\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300985 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-machine-approver-tls\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301000 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-encryption-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301017 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e8188-571a-4f41-8665-0565bf75f0d3-config\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301058 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301079 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70da4912-d52e-41a4-bf05-91f3f377d243-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301105 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208b512b-e1b8-4df9-9ec2-0f30bea24a20-serving-cert\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 
06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301124 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5sd\" (UniqueName: \"kubernetes.io/projected/c5567f5a-5084-4cc6-b654-f1190dcc0064-kube-api-access-cj5sd\") pod \"downloads-7954f5f757-k855s\" (UID: \"c5567f5a-5084-4cc6-b654-f1190dcc0064\") " pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301822 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.302408 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-config\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303318 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48edf52b-d54b-4116-95d0-f8051704a4e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303407 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-audit-dir\") pod \"apiserver-76f77b778f-p4428\" (UID: 
\"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303454 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303511 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303549 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70da4912-d52e-41a4-bf05-91f3f377d243-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303582 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303630 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-dir\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303657 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303691 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303904 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305313 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57e1cc03-984e-4486-8393-f80bc1aa94af-metrics-tls\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305361 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-trusted-ca-bundle\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305390 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305451 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305480 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305505 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-image-import-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.305697 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.306451 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-service-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.306489 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.306934 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc8173-94d9-419d-9031-b0664a3f01e4-serving-cert\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.307370 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.313852 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.316020 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.316256 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.317384 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.317626 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.321053 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.321511 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.322829 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.323120 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.323288 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.323938 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.324094 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.329355 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-cxnpf"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.329829 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.330048 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.330697 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.331198 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.332653 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.333530 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.333824 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.334762 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kq7d8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.334873 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.335336 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.335643 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8kvjs"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.337131 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.337687 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.338703 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.339573 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.340038 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j966n"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.341060 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.342052 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.342948 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.343923 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.345728 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bkrnj"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.346661 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.347972 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.349283 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.349540 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k6jdd"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.350822 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f95sb"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.354414 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.354459 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.358453 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.359257 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p4428"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.361194 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.362550 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 
06:37:25.367931 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.369245 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.369659 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.370630 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k855s"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.371780 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.373006 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.374348 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.375486 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kq7d8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.376751 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.378046 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jlcqw"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.379299 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.379387 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kqctf"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.380423 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.381122 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.382618 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jlcqw"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.383721 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.384775 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6plkm"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.385752 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.386783 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.388569 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.389277 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.389977 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.391576 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.392662 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.396324 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.399074 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.400911 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bkrnj"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.402360 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.404400 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.405534 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5gjk2"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406139 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406779 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-serving-cert\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406789 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5gjk2"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406810 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406829 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406843 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-service-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406857 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-client\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406886 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-audit\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406903 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-oauth-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406922 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-auth-proxy-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406943 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-policies\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406957 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-trusted-ca\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406971 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/465393d8-5293-482f-8f3b-91578b3ba57b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406987 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1d8220-775c-47a7-a772-00eacc2f957c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407001 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1d8220-775c-47a7-a772-00eacc2f957c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407035 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407050 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-service-ca\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407070 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407097 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407127 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-images\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407143 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407160 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4e8188-571a-4f41-8665-0565bf75f0d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407176 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407193 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 
06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407210 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxdk\" (UniqueName: \"kubernetes.io/projected/465393d8-5293-482f-8f3b-91578b3ba57b-kube-api-access-smxdk\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407234 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48edf52b-d54b-4116-95d0-f8051704a4e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407258 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407280 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-node-pullsecrets\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407302 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-config\") pod \"console-operator-58897d9998-j966n\" (UID: 
\"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407323 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-machine-approver-tls\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407344 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-encryption-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407365 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e8188-571a-4f41-8665-0565bf75f0d3-config\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407385 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70da4912-d52e-41a4-bf05-91f3f377d243-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407403 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/208b512b-e1b8-4df9-9ec2-0f30bea24a20-serving-cert\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407422 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj5sd\" (UniqueName: \"kubernetes.io/projected/c5567f5a-5084-4cc6-b654-f1190dcc0064-kube-api-access-cj5sd\") pod \"downloads-7954f5f757-k855s\" (UID: \"c5567f5a-5084-4cc6-b654-f1190dcc0064\") " pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407443 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gbh\" (UniqueName: \"kubernetes.io/projected/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-kube-api-access-25gbh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407464 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48edf52b-d54b-4116-95d0-f8051704a4e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407482 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95rdw\" (UniqueName: \"kubernetes.io/projected/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-kube-api-access-95rdw\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407502 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407527 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70da4912-d52e-41a4-bf05-91f3f377d243-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407545 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407564 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-dir\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407625 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407646 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-audit-dir\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407665 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57e1cc03-984e-4486-8393-f80bc1aa94af-metrics-tls\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407684 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407704 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-trusted-ca-bundle\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407729 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqstm\" 
(UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407753 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407898 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407921 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-image-import-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407928 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-service-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407944 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cdf7744-1629-46a4-b176-0fc75c149a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408066 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-serving-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408124 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408158 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkl57\" (UniqueName: \"kubernetes.io/projected/8a371e85-6173-4802-976d-7ee68bc9afdc-kube-api-access-qkl57\") pod 
\"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408175 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msv6f\" (UniqueName: \"kubernetes.io/projected/0ee14186-f787-47f1-8537-8cb2210ac28c-kube-api-access-msv6f\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408194 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408212 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a371e85-6173-4802-976d-7ee68bc9afdc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408228 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp4zs\" (UniqueName: \"kubernetes.io/projected/3dc93a0c-f8e0-4c76-a032-6d3e34878168-kube-api-access-dp4zs\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408246 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j6m5w\" (UniqueName: \"kubernetes.io/projected/6b1d8220-775c-47a7-a772-00eacc2f957c-kube-api-access-j6m5w\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408264 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408282 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408299 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408315 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzm42\" (UniqueName: \"kubernetes.io/projected/70da4912-d52e-41a4-bf05-91f3f377d243-kube-api-access-zzm42\") pod 
\"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408332 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c207fbab-618a-4c01-8450-cb7ffad0f50d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408351 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a371e85-6173-4802-976d-7ee68bc9afdc-serving-cert\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408366 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408383 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408426 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-encryption-config\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408446 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-console-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408463 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408485 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnbk\" (UniqueName: \"kubernetes.io/projected/208b512b-e1b8-4df9-9ec2-0f30bea24a20-kube-api-access-xjnbk\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408501 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-config\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 
06:37:25.408517 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-client\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408534 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-oauth-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408552 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5v4\" (UniqueName: \"kubernetes.io/projected/c207fbab-618a-4c01-8450-cb7ffad0f50d-kube-api-access-bd5v4\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408561 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408568 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408633 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408664 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408706 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbjg\" (UniqueName: \"kubernetes.io/projected/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-kube-api-access-nvbjg\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408737 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408759 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408781 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408803 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvsvx\" (UniqueName: \"kubernetes.io/projected/6cdf7744-1629-46a4-b176-0fc75c149a95-kube-api-access-qvsvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408827 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9zw\" (UniqueName: \"kubernetes.io/projected/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-kube-api-access-5w9zw\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408852 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409001 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409053 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-client\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409082 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409107 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zwkm\" (UniqueName: 
\"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-kube-api-access-8zwkm\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409129 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4e8188-571a-4f41-8665-0565bf75f0d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409146 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6j8z\" (UniqueName: \"kubernetes.io/projected/08ac51dd-419d-4632-8a49-1972be301121-kube-api-access-f6j8z\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409163 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-serving-cert\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409182 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25cm\" (UniqueName: \"kubernetes.io/projected/57e1cc03-984e-4486-8393-f80bc1aa94af-kube-api-access-x25cm\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409221 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-config\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409237 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-serving-cert\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409253 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bmvf\" (UniqueName: \"kubernetes.io/projected/026a670d-684f-4eb6-bda0-bd60294d3b95-kube-api-access-8bmvf\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409447 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409514 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-config\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.409579 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-serving-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.410089 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.410857 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-client\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.411420 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.411565 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-audit\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 
21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.411912 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.412148 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.412503 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.412633 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.412897 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.413129 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.413405 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414001 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-config\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414045 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-images\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414374 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414667 4913 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c207fbab-618a-4c01-8450-cb7ffad0f50d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414727 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a371e85-6173-4802-976d-7ee68bc9afdc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414957 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-auth-proxy-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.415228 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.415556 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-encryption-config\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.415582 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.415741 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1d8220-775c-47a7-a772-00eacc2f957c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416018 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416088 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-console-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416340 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-serving-cert\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " 
pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416447 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-policies\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416681 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-service-ca\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417011 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-trusted-ca\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417170 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-oauth-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417263 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417375 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-node-pullsecrets\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417403 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-dir\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417485 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1d8220-775c-47a7-a772-00eacc2f957c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417579 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-config\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417950 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4e8188-571a-4f41-8665-0565bf75f0d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: 
\"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.418019 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e8188-571a-4f41-8665-0565bf75f0d3-config\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.418020 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.418363 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-audit-dir\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.418787 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70da4912-d52e-41a4-bf05-91f3f377d243-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419072 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-trusted-ca-bundle\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419416 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-image-import-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419494 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-encryption-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419496 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419608 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-machine-approver-tls\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.420169 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/465393d8-5293-482f-8f3b-91578b3ba57b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.420900 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.421186 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-client\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.421408 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-oauth-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.421628 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-client\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.423329 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70da4912-d52e-41a4-bf05-91f3f377d243-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.423365 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57e1cc03-984e-4486-8393-f80bc1aa94af-metrics-tls\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.423381 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.423543 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.424186 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" 
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.424958 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-serving-cert\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.425837 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208b512b-e1b8-4df9-9ec2-0f30bea24a20-serving-cert\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.429302 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.430417 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.440121 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-serving-cert\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.452486 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.474088 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.476907 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.489296 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.500823 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.509426 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.509918 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.509961 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qvsvx\" (UniqueName: \"kubernetes.io/projected/6cdf7744-1629-46a4-b176-0fc75c149a95-kube-api-access-qvsvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510036 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510077 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510140 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95rdw\" (UniqueName: \"kubernetes.io/projected/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-kube-api-access-95rdw\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510163 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25gbh\" (UniqueName: \"kubernetes.io/projected/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-kube-api-access-25gbh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510204 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510238 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cdf7744-1629-46a4-b176-0fc75c149a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510280 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510383 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510429 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nvbjg\" (UniqueName: \"kubernetes.io/projected/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-kube-api-access-nvbjg\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.529355 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.549078 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.569890 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.589549 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.599559 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48edf52b-d54b-4116-95d0-f8051704a4e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.616253 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.619558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48edf52b-d54b-4116-95d0-f8051704a4e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.629170 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.649859 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.656214 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a371e85-6173-4802-976d-7ee68bc9afdc-serving-cert\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.670228 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.690066 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.710168 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.729452 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.749451 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.754636 4913 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.778457 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.790195 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.810416 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.821734 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.829454 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.850549 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.870371 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.890810 4913 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.910298 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.950454 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.970462 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.990427 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.009657 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.030940 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.051383 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.070203 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.090886 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.110157 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.160461 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.161051 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.169894 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.190168 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.234650 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gtft\" (UniqueName: \"kubernetes.io/projected/19fc8173-94d9-419d-9031-b0664a3f01e4-kube-api-access-7gtft\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.250166 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.254873 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.270373 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.290544 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.310581 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.328885 4913 request.go:700] Waited for 1.011016508s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.331381 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.346676 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cdf7744-1629-46a4-b176-0fc75c149a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.349894 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.366806 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.370419 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.390385 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.393950 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.410686 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.431392 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.450274 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.469921 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.493201 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.510838 4913 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.511006 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert podName:fdb0c051-dafc-4d42-8c28-d28c049eb0f7 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:27.010961174 +0000 UTC m=+136.807320897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert") pod "catalog-operator-68c6474976-cjqvz" (UID: "fdb0c051-dafc-4d42-8c28-d28c049eb0f7") : failed to sync secret cache: timed out waiting for the condition
Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.511977 4913 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512098 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key podName:56b4a4e7-bb42-437e-8dce-70cbc917c7a8 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:27.012075294 +0000 UTC m=+136.808435007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key") pod "service-ca-9c57cc56f-kq7d8" (UID: "56b4a4e7-bb42-437e-8dce-70cbc917c7a8") : failed to sync secret cache: timed out waiting for the condition
Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512096 4913 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512268 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert podName:fdb0c051-dafc-4d42-8c28-d28c049eb0f7 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:27.012240178 +0000 UTC m=+136.808599891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert") pod "catalog-operator-68c6474976-cjqvz" (UID: "fdb0c051-dafc-4d42-8c28-d28c049eb0f7") : failed to sync secret cache: timed out waiting for the condition
Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512122 4913 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512377 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle podName:56b4a4e7-bb42-437e-8dce-70cbc917c7a8 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:27.012362461 +0000 UTC m=+136.808722174 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle") pod "service-ca-9c57cc56f-kq7d8" (UID: "56b4a4e7-bb42-437e-8dce-70cbc917c7a8") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.512528 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.530201 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.549784 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.576015 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.589647 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.610116 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.629689 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.636468 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8kvjs"]
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.649666 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.659149 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"]
Jan 21 06:37:26 crc kubenswrapper[4913]: W0121 06:37:26.668657 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod527ef351_fb35_4f58_ae7b_d410c23496c6.slice/crio-67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc WatchSource:0}: Error finding container 67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc: Status 404 returned error can't find the container with id 67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.669653 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.690239 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.709578 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.729664 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.749865 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.769728 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.789829 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.810729 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.830699 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.851562 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.870469 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.889764 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.910351 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.929648 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.952431 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.969892 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.989781 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.010042 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.032253 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.032407 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.032730 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.032790 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.033898 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.039757 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.039905 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.040151 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.051137 4913 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.070259 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.090283 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.111323 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.131015 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.151764 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.170827 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.190187 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.210299 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.230126 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.264490 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp4zs\" (UniqueName: \"kubernetes.io/projected/3dc93a0c-f8e0-4c76-a032-6d3e34878168-kube-api-access-dp4zs\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.268032 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" event={"ID":"527ef351-fb35-4f58-ae7b-d410c23496c6","Type":"ContainerStarted","Data":"e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5"}
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.268185 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" event={"ID":"527ef351-fb35-4f58-ae7b-d410c23496c6","Type":"ContainerStarted","Data":"67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc"}
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.268915 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.270276 4913 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bclp4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.270322 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.270508 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" event={"ID":"19fc8173-94d9-419d-9031-b0664a3f01e4","Type":"ContainerStarted","Data":"979142fe7086d20d808b547cc993bc6a74e3ad1c7b59d0514973ebca8333c021"}
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.270615 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" event={"ID":"19fc8173-94d9-419d-9031-b0664a3f01e4","Type":"ContainerStarted","Data":"1ce9b8e54eddc8eb47fd9f20dd34ebd104adde7eba0ddb229943dc0f28101a43"}
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.292493 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzm42\" (UniqueName: \"kubernetes.io/projected/70da4912-d52e-41a4-bf05-91f3f377d243-kube-api-access-zzm42\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.303875 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6m5w\" (UniqueName: \"kubernetes.io/projected/6b1d8220-775c-47a7-a772-00eacc2f957c-kube-api-access-j6m5w\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.327316 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bmvf\" (UniqueName: \"kubernetes.io/projected/026a670d-684f-4eb6-bda0-bd60294d3b95-kube-api-access-8bmvf\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.345084 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msv6f\" (UniqueName: \"kubernetes.io/projected/0ee14186-f787-47f1-8537-8cb2210ac28c-kube-api-access-msv6f\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.348859 4913 request.go:700] Waited for 1.937505382s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.370061 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkl57\" (UniqueName: \"kubernetes.io/projected/8a371e85-6173-4802-976d-7ee68bc9afdc-kube-api-access-qkl57\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.381095 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.407422 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnbk\" (UniqueName: \"kubernetes.io/projected/208b512b-e1b8-4df9-9ec2-0f30bea24a20-kube-api-access-xjnbk\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.426118 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zwkm\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-kube-api-access-8zwkm\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.435132 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.443378 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.454072 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4e8188-571a-4f41-8665-0565bf75f0d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.464876 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.465288 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6j8z\" (UniqueName: \"kubernetes.io/projected/08ac51dd-419d-4632-8a49-1972be301121-kube-api-access-f6j8z\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.483027 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.487779 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-k6jdd"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.489691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9zw\" (UniqueName: \"kubernetes.io/projected/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-kube-api-access-5w9zw\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.509847 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25cm\" (UniqueName: \"kubernetes.io/projected/57e1cc03-984e-4486-8393-f80bc1aa94af-kube-api-access-x25cm\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.524732 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxdk\" (UniqueName: \"kubernetes.io/projected/465393d8-5293-482f-8f3b-91578b3ba57b-kube-api-access-smxdk\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.549017 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.556843 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.563383 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.568139 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.593816 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.594647 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.601395 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.601917 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5v4\" (UniqueName: \"kubernetes.io/projected/c207fbab-618a-4c01-8450-cb7ffad0f50d-kube-api-access-bd5v4\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.612360 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.623300 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.633794 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj5sd\" (UniqueName: \"kubernetes.io/projected/c5567f5a-5084-4cc6-b654-f1190dcc0064-kube-api-access-cj5sd\") pod \"downloads-7954f5f757-k855s\" (UID: \"c5567f5a-5084-4cc6-b654-f1190dcc0064\") " pod="openshift-console/downloads-7954f5f757-k855s"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.659992 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvsvx\" (UniqueName: \"kubernetes.io/projected/6cdf7744-1629-46a4-b176-0fc75c149a95-kube-api-access-qvsvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.670347 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95rdw\" (UniqueName: \"kubernetes.io/projected/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-kube-api-access-95rdw\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.674114 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j966n"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.694843 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25gbh\" (UniqueName: \"kubernetes.io/projected/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-kube-api-access-25gbh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.705148 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbjg\" (UniqueName: \"kubernetes.io/projected/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-kube-api-access-nvbjg\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.706082 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.729029 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741400 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-proxy-tls\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741452 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jwg\" (UniqueName: \"kubernetes.io/projected/f6ca48b3-019f-4481-b136-7d392b7073d8-kube-api-access-b2jwg\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741487 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6ca48b3-019f-4481-b136-7d392b7073d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741510 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/238fcbbb-ece2-4108-b4be-79ed872e541d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121
06:37:27.741576 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c174a67-522b-4d34-ba66-905ff560f206-metrics-tls\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741620 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741655 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0ca241-c740-42a3-8fd9-970024126d64-service-ca-bundle\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741701 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741717 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") pod \"image-registry-697d97f7c8-78wqc\" 
(UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741743 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741791 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gbf\" (UniqueName: \"kubernetes.io/projected/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-kube-api-access-r9gbf\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741806 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-config\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741844 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741891 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-metrics-certs\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741930 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xr5\" (UniqueName: \"kubernetes.io/projected/6c174a67-522b-4d34-ba66-905ff560f206-kube-api-access-q6xr5\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741955 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-tmpfs\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741969 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4rt\" (UniqueName: \"kubernetes.io/projected/3f92f014-e88f-4e07-8f20-892e47c5de80-kube-api-access-kd4rt\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742005 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742019 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q94j\" (UniqueName: \"kubernetes.io/projected/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-kube-api-access-6q94j\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742034 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc94523-e315-4913-8ea8-ffa72274f5ab-serving-cert\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742056 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742115 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-srv-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742172 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742186 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742211 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742243 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c174a67-522b-4d34-ba66-905ff560f206-config-volume\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742276 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" 
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742315 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742330 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sh9l\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-kube-api-access-7sh9l\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742382 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742398 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsnmv\" (UniqueName: \"kubernetes.io/projected/9ef3dfdf-4ae9-4baa-a830-e50b4942dd32-kube-api-access-gsnmv\") pod \"migrator-59844c95c7-5l5kl\" (UID: \"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742434 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fxmd4\" (UniqueName: \"kubernetes.io/projected/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-kube-api-access-fxmd4\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742451 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/238fcbbb-ece2-4108-b4be-79ed872e541d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742468 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742484 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f92f014-e88f-4e07-8f20-892e47c5de80-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742526 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b89ft\" (UniqueName: 
\"kubernetes.io/projected/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-kube-api-access-b89ft\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742540 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742623 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qxg\" (UniqueName: \"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742648 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-images\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742671 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742687 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzcpn\" (UniqueName: \"kubernetes.io/projected/2dc94523-e315-4913-8ea8-ffa72274f5ab-kube-api-access-tzcpn\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742727 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742777 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742844 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742877 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-stats-auth\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742960 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.743001 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ct2\" (UniqueName: \"kubernetes.io/projected/3e0ca241-c740-42a3-8fd9-970024126d64-kube-api-access-77ct2\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.743033 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6ca48b3-019f-4481-b136-7d392b7073d8-proxy-tls\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.744399 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-default-certificate\") pod \"router-default-5444994796-cxnpf\" (UID: 
\"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.744434 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc94523-e315-4913-8ea8-ffa72274f5ab-config\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.744478 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-webhook-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: E0121 06:37:27.749346 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.249331111 +0000 UTC m=+138.045690784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.750775 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.774181 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.797071 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.806964 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849099 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tpw\" (UniqueName: \"kubernetes.io/projected/08d20980-2196-4efa-952e-defded465fb4-kube-api-access-c2tpw\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849339 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6ca48b3-019f-4481-b136-7d392b7073d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 
06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849366 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/238fcbbb-ece2-4108-b4be-79ed872e541d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849385 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43177e9e-03e6-4864-843a-c753a096648f-cert\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849411 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c174a67-522b-4d34-ba66-905ff560f206-metrics-tls\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849430 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849447 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0ca241-c740-42a3-8fd9-970024126d64-service-ca-bundle\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " 
pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849466 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxz9\" (UniqueName: \"kubernetes.io/projected/3243dc09-2f27-4905-a1cc-08ff6d1e270f-kube-api-access-2rxz9\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849483 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849498 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849512 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849527 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-csi-data-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849544 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9gbf\" (UniqueName: \"kubernetes.io/projected/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-kube-api-access-r9gbf\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849560 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-config\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849609 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849629 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-metrics-certs\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849653 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6xr5\" (UniqueName: \"kubernetes.io/projected/6c174a67-522b-4d34-ba66-905ff560f206-kube-api-access-q6xr5\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849680 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-tmpfs\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849699 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4rt\" (UniqueName: \"kubernetes.io/projected/3f92f014-e88f-4e07-8f20-892e47c5de80-kube-api-access-kd4rt\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849716 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849732 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q94j\" (UniqueName: \"kubernetes.io/projected/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-kube-api-access-6q94j\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849747 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc94523-e315-4913-8ea8-ffa72274f5ab-serving-cert\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849762 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-srv-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849811 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849825 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849841 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849859 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c174a67-522b-4d34-ba66-905ff560f206-config-volume\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849874 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849888 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sh9l\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-kube-api-access-7sh9l\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849903 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-mountpoint-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849918 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849941 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-socket-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849961 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849983 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsnmv\" (UniqueName: \"kubernetes.io/projected/9ef3dfdf-4ae9-4baa-a830-e50b4942dd32-kube-api-access-gsnmv\") pod \"migrator-59844c95c7-5l5kl\" (UID: \"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850005 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxmd4\" (UniqueName: \"kubernetes.io/projected/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-kube-api-access-fxmd4\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850019 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/238fcbbb-ece2-4108-b4be-79ed872e541d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850036 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850052 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f92f014-e88f-4e07-8f20-892e47c5de80-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850069 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b89ft\" (UniqueName: \"kubernetes.io/projected/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-kube-api-access-b89ft\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850087 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850109 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-certs\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850140 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qxg\" (UniqueName: \"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850162 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-node-bootstrap-token\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850182 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-images\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850199 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850214 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzcpn\" (UniqueName: \"kubernetes.io/projected/2dc94523-e315-4913-8ea8-ffa72274f5ab-kube-api-access-tzcpn\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850231 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-registration-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850255 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850283 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850302 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-stats-auth\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850321 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv67c\" (UniqueName: \"kubernetes.io/projected/43177e9e-03e6-4864-843a-c753a096648f-kube-api-access-cv67c\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850346 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850381 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77ct2\" (UniqueName: \"kubernetes.io/projected/3e0ca241-c740-42a3-8fd9-970024126d64-kube-api-access-77ct2\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850399 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6ca48b3-019f-4481-b136-7d392b7073d8-proxy-tls\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850424 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-default-certificate\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850438 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc94523-e315-4913-8ea8-ffa72274f5ab-config\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850452 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-webhook-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850467 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-proxy-tls\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850482 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-plugins-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850497 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2jwg\" (UniqueName: \"kubernetes.io/projected/f6ca48b3-019f-4481-b136-7d392b7073d8-kube-api-access-b2jwg\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"
Jan 21 06:37:27 crc kubenswrapper[4913]: E0121 06:37:27.850852 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.350837619 +0000 UTC m=+138.147197292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.860474 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6ca48b3-019f-4481-b136-7d392b7073d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.861941 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.865417 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/238fcbbb-ece2-4108-b4be-79ed872e541d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.866017 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-tmpfs\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.870767 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.871432 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-config\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.875857 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.883813 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.884367 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0ca241-c740-42a3-8fd9-970024126d64-service-ca-bundle\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.889249 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.889713 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-images\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.893570 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c174a67-522b-4d34-ba66-905ff560f206-config-volume\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.894037 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.894274 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.895248 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc94523-e315-4913-8ea8-ffa72274f5ab-config\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.897494 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.898417 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.906800 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/238fcbbb-ece2-4108-b4be-79ed872e541d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.909506 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c174a67-522b-4d34-ba66-905ff560f206-metrics-tls\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.909990 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.910120 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.910531 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.912356 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-metrics-certs\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.912760 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.914374 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2jwg\" (UniqueName: \"kubernetes.io/projected/f6ca48b3-019f-4481-b136-7d392b7073d8-kube-api-access-b2jwg\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.915308 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.916331 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc94523-e315-4913-8ea8-ffa72274f5ab-serving-cert\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.916637 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.917066 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-stats-auth\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.919371 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.919910 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f92f014-e88f-4e07-8f20-892e47c5de80-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.920561 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p4428"]
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.922563 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-default-certificate\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.934152 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-proxy-tls\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.934382 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-webhook-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.934768 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6ca48b3-019f-4481-b136-7d392b7073d8-proxy-tls\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.934985 4913 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.938254 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-srv-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.945382 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4rt\" (UniqueName: \"kubernetes.io/projected/3f92f014-e88f-4e07-8f20-892e47c5de80-kube-api-access-kd4rt\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.947628 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxmd4\" (UniqueName: \"kubernetes.io/projected/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-kube-api-access-fxmd4\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.948048 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsnmv\" (UniqueName: \"kubernetes.io/projected/9ef3dfdf-4ae9-4baa-a830-e50b4942dd32-kube-api-access-gsnmv\") pod \"migrator-59844c95c7-5l5kl\" (UID: \"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.948698 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xr5\" (UniqueName: 
\"kubernetes.io/projected/6c174a67-522b-4d34-ba66-905ff560f206-kube-api-access-q6xr5\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.950949 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2tpw\" (UniqueName: \"kubernetes.io/projected/08d20980-2196-4efa-952e-defded465fb4-kube-api-access-c2tpw\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.950982 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43177e9e-03e6-4864-843a-c753a096648f-cert\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951012 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxz9\" (UniqueName: \"kubernetes.io/projected/3243dc09-2f27-4905-a1cc-08ff6d1e270f-kube-api-access-2rxz9\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951036 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-csi-data-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951106 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-mountpoint-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951122 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-socket-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951156 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-certs\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951176 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-node-bootstrap-token\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951199 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-registration-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951218 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951236 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv67c\" (UniqueName: \"kubernetes.io/projected/43177e9e-03e6-4864-843a-c753a096648f-kube-api-access-cv67c\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951270 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-plugins-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951535 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-plugins-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.952843 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-csi-data-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.952876 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-mountpoint-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.952893 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-registration-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: E0121 06:37:27.953108 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.453095538 +0000 UTC m=+138.249455211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.953991 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-socket-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.954678 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.956327 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-node-bootstrap-token\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.957268 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-certs\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.961934 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43177e9e-03e6-4864-843a-c753a096648f-cert\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.977508 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62"] Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.981925 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.987722 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.992053 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.000038 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.003893 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9gbf\" (UniqueName: \"kubernetes.io/projected/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-kube-api-access-r9gbf\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.015620 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.028324 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.032117 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.051623 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.051971 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.551947986 +0000 UTC m=+138.348307649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.052289 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.052608 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.552585252 +0000 UTC m=+138.348944925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.079518 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.110540 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q94j\" (UniqueName: \"kubernetes.io/projected/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-kube-api-access-6q94j\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.113975 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sh9l\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-kube-api-access-7sh9l\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.126652 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qxg\" (UniqueName: 
\"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:28 crc kubenswrapper[4913]: W0121 06:37:28.127748 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod026a670d_684f_4eb6_bda0_bd60294d3b95.slice/crio-d2c4d6910c3b46358d4156ca5942e4cb6ce137800a796748c637a4c2aa706509 WatchSource:0}: Error finding container d2c4d6910c3b46358d4156ca5942e4cb6ce137800a796748c637a4c2aa706509: Status 404 returned error can't find the container with id d2c4d6910c3b46358d4156ca5942e4cb6ce137800a796748c637a4c2aa706509 Jan 21 06:37:28 crc kubenswrapper[4913]: W0121 06:37:28.128431 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd2a9afe_21be_43e4_970d_03daff0713a1.slice/crio-e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab WatchSource:0}: Error finding container e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab: Status 404 returned error can't find the container with id e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.153548 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.153930 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.653911697 +0000 UTC m=+138.450271370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.162234 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzcpn\" (UniqueName: \"kubernetes.io/projected/2dc94523-e315-4913-8ea8-ffa72274f5ab-kube-api-access-tzcpn\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.199953 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.201618 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b89ft\" (UniqueName: \"kubernetes.io/projected/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-kube-api-access-b89ft\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.207357 4913 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.219359 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.226014 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.226982 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77ct2\" (UniqueName: \"kubernetes.io/projected/3e0ca241-c740-42a3-8fd9-970024126d64-kube-api-access-77ct2\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.231043 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.240572 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxz9\" (UniqueName: \"kubernetes.io/projected/3243dc09-2f27-4905-a1cc-08ff6d1e270f-kube-api-access-2rxz9\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.243935 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.248380 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.251300 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tpw\" (UniqueName: \"kubernetes.io/projected/08d20980-2196-4efa-952e-defded465fb4-kube-api-access-c2tpw\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.254515 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.254972 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.754960343 +0000 UTC m=+138.551320016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.264181 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.268840 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.279812 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.290999 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.304770 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv67c\" (UniqueName: \"kubernetes.io/projected/43177e9e-03e6-4864-843a-c753a096648f-kube-api-access-cv67c\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.313391 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p4428" event={"ID":"026a670d-684f-4eb6-bda0-bd60294d3b95","Type":"ContainerStarted","Data":"d2c4d6910c3b46358d4156ca5942e4cb6ce137800a796748c637a4c2aa706509"} Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.345605 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" event={"ID":"bd2a9afe-21be-43e4-970d-03daff0713a1","Type":"ContainerStarted","Data":"e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab"} Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.367728 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.368448 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.368827 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.369219 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.369465 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.869435328 +0000 UTC m=+138.665795061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.369788 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.370180 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.870146587 +0000 UTC m=+138.666506260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.387010 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" event={"ID":"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01","Type":"ContainerStarted","Data":"7c1e9e9e971d4384f838b4139329efb07faf6cb36bd96f30e288e77ef6ff29c2"} Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.405668 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.464709 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.477404 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.477779 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:28.977763919 +0000 UTC m=+138.774123592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.582872 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.584129 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.084116317 +0000 UTC m=+138.880475990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.684844 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.685144 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.185128692 +0000 UTC m=+138.981488365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.785553 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.786355 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.786852 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.286839707 +0000 UTC m=+139.083199380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.798114 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.808019 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6plkm"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.817468 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k6jdd"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.820369 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.887152 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.887433 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.387419751 +0000 UTC m=+139.183779424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.988091 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.988387 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.488376335 +0000 UTC m=+139.284736008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.042196 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" podStartSLOduration=120.04218041 podStartE2EDuration="2m0.04218041s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:29.040824674 +0000 UTC m=+138.837184347" watchObservedRunningTime="2026-01-21 06:37:29.04218041 +0000 UTC m=+138.838540083" Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.089065 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.089519 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.589499483 +0000 UTC m=+139.385859156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.178189 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kq7d8"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.190182 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.190434 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.690423456 +0000 UTC m=+139.486783129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.228735 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.238178 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.267288 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.270926 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.277430 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j966n"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.278069 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.292005 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.292302 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.792288575 +0000 UTC m=+139.588648248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.316840 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f95sb"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.324207 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.339320 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bkrnj"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.355944 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.359862 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.368274 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.370772 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.375937 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.378073 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.380282 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.382565 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k855s"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.393439 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.393715 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.89370334 +0000 UTC m=+139.690063013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.503470 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.503626 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.003580743 +0000 UTC m=+139.799940416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.503822 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.504121 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.004106047 +0000 UTC m=+139.800465720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.530775 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.604794 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.604985 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.104957548 +0000 UTC m=+139.901317231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.605075 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.605467 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.105454242 +0000 UTC m=+139.901813925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.706389 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.706549 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.206517188 +0000 UTC m=+140.002876871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.706779 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.707171 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.207155705 +0000 UTC m=+140.003515388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.733931 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" podStartSLOduration=120.733901559 podStartE2EDuration="2m0.733901559s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:29.731957797 +0000 UTC m=+139.528317480" watchObservedRunningTime="2026-01-21 06:37:29.733901559 +0000 UTC m=+139.530261262" Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.808546 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.808744 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.308719436 +0000 UTC m=+140.105079109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.808969 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.809268 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.30925977 +0000 UTC m=+140.105619443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.910188 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.910382 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.410344107 +0000 UTC m=+140.206703810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.910648 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.911021 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.411006485 +0000 UTC m=+140.207366148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.011846 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.012152 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.512103473 +0000 UTC m=+140.308463176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.012624 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.013069 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.513045868 +0000 UTC m=+140.309405581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.043077 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a4e8188_571a_4f41_8665_0565bf75f0d3.slice/crio-20fb40a24b9157e58a2ae536d6b63a6c23497f3e8decbd0c007754c38fdb6522 WatchSource:0}: Error finding container 20fb40a24b9157e58a2ae536d6b63a6c23497f3e8decbd0c007754c38fdb6522: Status 404 returned error can't find the container with id 20fb40a24b9157e58a2ae536d6b63a6c23497f3e8decbd0c007754c38fdb6522
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.056671 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70da4912_d52e_41a4_bf05_91f3f377d243.slice/crio-35622b3550499c291311eec34d3c7b65ee5f9191cd3703bc8c43fc0c9ca4f5c9 WatchSource:0}: Error finding container 35622b3550499c291311eec34d3c7b65ee5f9191cd3703bc8c43fc0c9ca4f5c9: Status 404 returned error can't find the container with id 35622b3550499c291311eec34d3c7b65ee5f9191cd3703bc8c43fc0c9ca4f5c9
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.075025 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82ebe95b_4e82_49aa_8693_52c0998ec7de.slice/crio-ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea WatchSource:0}: Error finding container ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea: Status 404 returned error can't find the container with id ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.076144 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ee14186_f787_47f1_8537_8cb2210ac28c.slice/crio-8f919f76c76f8095bc83962f57284e967dc21004fd0df0f0060e5490f305a7e4 WatchSource:0}: Error finding container 8f919f76c76f8095bc83962f57284e967dc21004fd0df0f0060e5490f305a7e4: Status 404 returned error can't find the container with id 8f919f76c76f8095bc83962f57284e967dc21004fd0df0f0060e5490f305a7e4
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.078463 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod208b512b_e1b8_4df9_9ec2_0f30bea24a20.slice/crio-0a00c5ff9509b053ee8e3e7342df1a920663cf193d8178c4b839ce553bd1afea WatchSource:0}: Error finding container 0a00c5ff9509b053ee8e3e7342df1a920663cf193d8178c4b839ce553bd1afea: Status 404 returned error can't find the container with id 0a00c5ff9509b053ee8e3e7342df1a920663cf193d8178c4b839ce553bd1afea
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.084421 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e1cc03_984e_4486_8393_f80bc1aa94af.slice/crio-436150d2c0d1f41c0ee83fdb6123928bf3e9ec5214026a65502ef3ed7be69ac3 WatchSource:0}: Error finding container 436150d2c0d1f41c0ee83fdb6123928bf3e9ec5214026a65502ef3ed7be69ac3: Status 404 returned error can't find the container with id 436150d2c0d1f41c0ee83fdb6123928bf3e9ec5214026a65502ef3ed7be69ac3
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.092883 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c174a67_522b_4d34_ba66_905ff560f206.slice/crio-71dfea35d5499283bec2045e677e6d7401d1c4562cdc2a2b1259e4c292a59ff1 WatchSource:0}: Error finding container 71dfea35d5499283bec2045e677e6d7401d1c4562cdc2a2b1259e4c292a59ff1: Status 404 returned error can't find the container with id 71dfea35d5499283bec2045e677e6d7401d1c4562cdc2a2b1259e4c292a59ff1
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.093713 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ca48b3_019f_4481_b136_7d392b7073d8.slice/crio-3960c0b85a6a56a5b8f1d3ced894406f2ddfd4a5b34bebe1a9bb0f2cff375bf7 WatchSource:0}: Error finding container 3960c0b85a6a56a5b8f1d3ced894406f2ddfd4a5b34bebe1a9bb0f2cff375bf7: Status 404 returned error can't find the container with id 3960c0b85a6a56a5b8f1d3ced894406f2ddfd4a5b34bebe1a9bb0f2cff375bf7
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.094172 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e5f1ef_7cb7_4909_beaf_cd352767d0ca.slice/crio-a025855fc0dfaee5539cd674f479e3e36ea77567d2cfd7c2c1ad76a57c2cb0cf WatchSource:0}: Error finding container a025855fc0dfaee5539cd674f479e3e36ea77567d2cfd7c2c1ad76a57c2cb0cf: Status 404 returned error can't find the container with id a025855fc0dfaee5539cd674f479e3e36ea77567d2cfd7c2c1ad76a57c2cb0cf
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.095195 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cdf7744_1629_46a4_b176_0fc75c149a95.slice/crio-5aae9cf401d1cd3c11d6879bf5f78b08f2f5971c648755f84925e08aae909035 WatchSource:0}: Error finding container 5aae9cf401d1cd3c11d6879bf5f78b08f2f5971c648755f84925e08aae909035: Status 404 returned error can't find the container with id 5aae9cf401d1cd3c11d6879bf5f78b08f2f5971c648755f84925e08aae909035
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.096421 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc207fbab_618a_4c01_8450_cb7ffad0f50d.slice/crio-27b480a591be02ef2d41235c0ade420af9ef5517b57dd91fdbe1c2273f5bd0aa WatchSource:0}: Error finding container 27b480a591be02ef2d41235c0ade420af9ef5517b57dd91fdbe1c2273f5bd0aa: Status 404 returned error can't find the container with id 27b480a591be02ef2d41235c0ade420af9ef5517b57dd91fdbe1c2273f5bd0aa
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.113653 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.113905 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.613890769 +0000 UTC m=+140.410250432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.157166 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19f5b544_ffc3_43fb_b9b4_c319cffd63c5.slice/crio-efcc430f5aed1c08afe1345e34900a057f17e67799a77b9478c4fa393660d023 WatchSource:0}: Error finding container efcc430f5aed1c08afe1345e34900a057f17e67799a77b9478c4fa393660d023: Status 404 returned error can't find the container with id efcc430f5aed1c08afe1345e34900a057f17e67799a77b9478c4fa393660d023
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.159981 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5567f5a_5084_4cc6_b654_f1190dcc0064.slice/crio-d09f710d1495fb4fe68095007274c5b103267ce2daa1637f19c0ebea6dfed9e4 WatchSource:0}: Error finding container d09f710d1495fb4fe68095007274c5b103267ce2daa1637f19c0ebea6dfed9e4: Status 404 returned error can't find the container with id d09f710d1495fb4fe68095007274c5b103267ce2daa1637f19c0ebea6dfed9e4
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.216417 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.217102 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.717076852 +0000 UTC m=+140.513436535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.317005 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.317345 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.817325368 +0000 UTC m=+140.613685041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.317502 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.317773 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.817762319 +0000 UTC m=+140.614121992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.387523 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"]
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.413669 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa7fe7b_4999_4a9b_a945_cc404c5467f9.slice/crio-dd4205466f8e7bce22cafadb041dbd044734595dde8e169d07fd8e9d84d8a60f WatchSource:0}: Error finding container dd4205466f8e7bce22cafadb041dbd044734595dde8e169d07fd8e9d84d8a60f: Status 404 returned error can't find the container with id dd4205466f8e7bce22cafadb041dbd044734595dde8e169d07fd8e9d84d8a60f
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.413856 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" event={"ID":"82ebe95b-4e82-49aa-8693-52c0998ec7de","Type":"ContainerStarted","Data":"ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.417626 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" event={"ID":"3dc93a0c-f8e0-4c76-a032-6d3e34878168","Type":"ContainerStarted","Data":"b70c29043f47645e7a927707cc5e0659ec72ac62e6e82635ef7f49a9182b6bba"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.417946 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.418899 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.918882818 +0000 UTC m=+140.715242491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.422602 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"]
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.424178 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cxnpf" event={"ID":"3e0ca241-c740-42a3-8fd9-970024126d64","Type":"ContainerStarted","Data":"b4f7cf104bd42ce2334382e66b471f7618fde2b6a6aa02252b6a1c874aaa9ac8"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.426897 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" event={"ID":"6cdf7744-1629-46a4-b176-0fc75c149a95","Type":"ContainerStarted","Data":"5aae9cf401d1cd3c11d6879bf5f78b08f2f5971c648755f84925e08aae909035"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.428096 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" event={"ID":"6a4e8188-571a-4f41-8665-0565bf75f0d3","Type":"ContainerStarted","Data":"20fb40a24b9157e58a2ae536d6b63a6c23497f3e8decbd0c007754c38fdb6522"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.429167 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" event={"ID":"57e1cc03-984e-4486-8393-f80bc1aa94af","Type":"ContainerStarted","Data":"436150d2c0d1f41c0ee83fdb6123928bf3e9ec5214026a65502ef3ed7be69ac3"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.430127 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" event={"ID":"70da4912-d52e-41a4-bf05-91f3f377d243","Type":"ContainerStarted","Data":"35622b3550499c291311eec34d3c7b65ee5f9191cd3703bc8c43fc0c9ca4f5c9"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.437660 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" event={"ID":"56b4a4e7-bb42-437e-8dce-70cbc917c7a8","Type":"ContainerStarted","Data":"3e68c3a1be9a8a28795d631d8936218b69cc2a5c00443197a763fed4e1c11829"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.448278 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" event={"ID":"f6ca48b3-019f-4481-b136-7d392b7073d8","Type":"ContainerStarted","Data":"3960c0b85a6a56a5b8f1d3ced894406f2ddfd4a5b34bebe1a9bb0f2cff375bf7"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.453135 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" event={"ID":"c207fbab-618a-4c01-8450-cb7ffad0f50d","Type":"ContainerStarted","Data":"27b480a591be02ef2d41235c0ade420af9ef5517b57dd91fdbe1c2273f5bd0aa"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.454906 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" event={"ID":"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53","Type":"ContainerStarted","Data":"ab2bc05b5e9281fe6e3501e96f755e08c0a90bf042018e5cf4910a8c57b87e25"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.455901 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" event={"ID":"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32","Type":"ContainerStarted","Data":"f7cc373b3ce6921173e1af01fe13eed02614767006785181d331c2b9cac72ba7"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.456795 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" event={"ID":"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0","Type":"ContainerStarted","Data":"5eccfbfcf9ee02278f4a4fde6f6825fe9160fb0648fb92a38fe89f35284a5a21"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.457609 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" event={"ID":"8a371e85-6173-4802-976d-7ee68bc9afdc","Type":"ContainerStarted","Data":"556fdf168eb0c80a931e188b0e37019ad0ee8045096315e083169e23ff52819e"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.458426 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" event={"ID":"48edf52b-d54b-4116-95d0-f8051704a4e3","Type":"ContainerStarted","Data":"c7d0244c9d6a10d64b62347f4e56e737366ec3d8a812987d3da50ab0458bd0ba"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.459161 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" event={"ID":"f3e3e7a7-a59e-4d12-8499-38ad4a72832d","Type":"ContainerStarted","Data":"cb3977af5e68023242bf0ddc97686fb8058507b9de52582bb7d762e6b09403d5"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.459848 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k6jdd" event={"ID":"08ac51dd-419d-4632-8a49-1972be301121","Type":"ContainerStarted","Data":"1711b26d6a73224ca5472a0d210d668d23572f6048bb46f3e8dbd5ec647c64b5"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.462059 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k855s" event={"ID":"c5567f5a-5084-4cc6-b654-f1190dcc0064","Type":"ContainerStarted","Data":"d09f710d1495fb4fe68095007274c5b103267ce2daa1637f19c0ebea6dfed9e4"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.462792 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" event={"ID":"19f5b544-ffc3-43fb-b9b4-c319cffd63c5","Type":"ContainerStarted","Data":"efcc430f5aed1c08afe1345e34900a057f17e67799a77b9478c4fa393660d023"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.466453 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bkrnj" event={"ID":"6c174a67-522b-4d34-ba66-905ff560f206","Type":"ContainerStarted","Data":"71dfea35d5499283bec2045e677e6d7401d1c4562cdc2a2b1259e4c292a59ff1"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.467709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" event={"ID":"0ee14186-f787-47f1-8537-8cb2210ac28c","Type":"ContainerStarted","Data":"8f919f76c76f8095bc83962f57284e967dc21004fd0df0f0060e5490f305a7e4"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.468431 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" event={"ID":"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca","Type":"ContainerStarted","Data":"a025855fc0dfaee5539cd674f479e3e36ea77567d2cfd7c2c1ad76a57c2cb0cf"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.469008 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" event={"ID":"fdb0c051-dafc-4d42-8c28-d28c049eb0f7","Type":"ContainerStarted","Data":"5c6f98c736472dc2d589a42a06b2aca07cf0ba69da97b9d081ad7bd7c2ec482b"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.469733 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j966n" event={"ID":"208b512b-e1b8-4df9-9ec2-0f30bea24a20","Type":"ContainerStarted","Data":"0a00c5ff9509b053ee8e3e7342df1a920663cf193d8178c4b839ce553bd1afea"}
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.470456 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" event={"ID":"6b1d8220-775c-47a7-a772-00eacc2f957c","Type":"ContainerStarted","Data":"2a114e2897754ab94efde60a2573c389c5c0331b728f55058f0ad97ff789ebce"}
Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.479353 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1cbeb4c_0b76_4c39_ab17_18085750e8c2.slice/crio-e3d2a3542ecb6b7add310fec4503bf21d22df3479df9f75ed3067ac547f2d35a WatchSource:0}: Error finding container e3d2a3542ecb6b7add310fec4503bf21d22df3479df9f75ed3067ac547f2d35a: Status 404 returned error can't find the container with id e3d2a3542ecb6b7add310fec4503bf21d22df3479df9f75ed3067ac547f2d35a
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.525445 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.525893 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.025872383 +0000 UTC m=+140.822232156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.626628 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.627197 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.127184406 +0000 UTC m=+140.923544079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.630776 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jlcqw"]
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.672925 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5gjk2"]
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.676318 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"]
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.729341 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.730298 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.230283918 +0000 UTC m=+141.026643591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.730810 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"]
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.733665 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"]
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.734148 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"]
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.741620 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"]
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.830639 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.830846 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.330814381 +0000 UTC m=+141.127174044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.830934 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.831685 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.331676843 +0000 UTC m=+141.128036516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.933082 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.933497 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.43348227 +0000 UTC m=+141.229841943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.035005 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.035776 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.535764089 +0000 UTC m=+141.332123762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.136537 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.136721 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.636693362 +0000 UTC m=+141.433053035 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.137033 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.137432 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.637424492 +0000 UTC m=+141.433784165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.241097 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.241691 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.741673254 +0000 UTC m=+141.538032927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.243095 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.243846 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.743835672 +0000 UTC m=+141.540195345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.346043 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.347648 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.847631702 +0000 UTC m=+141.643991375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.449076 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.449418 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.949402168 +0000 UTC m=+141.745761841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.475377 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" event={"ID":"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32","Type":"ContainerStarted","Data":"4940ca01f0e9f3f5f5928344156090ac97b6ca4fab74ef91cbc6d10f5c5bf441"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.481476 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" event={"ID":"3f92f014-e88f-4e07-8f20-892e47c5de80","Type":"ContainerStarted","Data":"0f33a02ba3aa1873a3a595d4b5ec68e54bbf6bb50d0d8f13d3798bcc0989ca6d"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.485168 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" event={"ID":"5fa7fe7b-4999-4a9b-a945-cc404c5467f9","Type":"ContainerStarted","Data":"3a551da773d0f1c2a076d37775d5dc550ef973cd76947622560b8c9acf60134d"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.485205 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" event={"ID":"5fa7fe7b-4999-4a9b-a945-cc404c5467f9","Type":"ContainerStarted","Data":"dd4205466f8e7bce22cafadb041dbd044734595dde8e169d07fd8e9d84d8a60f"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.486552 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.489129 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" event={"ID":"c207fbab-618a-4c01-8450-cb7ffad0f50d","Type":"ContainerStarted","Data":"5478fc68d4b2ffb50974b99c69ac9c15eed505a05bb5171369ef4c48ec7b6f0c"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.491052 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" event={"ID":"0a7775c5-ca46-4ab1-b4e1-96c818301059","Type":"ContainerStarted","Data":"8af3a3fe3dfde5bcb547f586da849f79867bd334c534af99562358101ad4a451"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.495552 4913 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vh2n7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.495601 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" podUID="5fa7fe7b-4999-4a9b-a945-cc404c5467f9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.497338 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" event={"ID":"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0","Type":"ContainerStarted","Data":"535a934516ef46b213295aa9268d017b6b175854880d7dee74d5133cb36ec92d"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.499568 4913 generic.go:334] "Generic (PLEG): container finished" 
podID="8a371e85-6173-4802-976d-7ee68bc9afdc" containerID="6d9b217161f73edab65c8877eff569e0764fb365375ecd808afcce9e39dad3cf" exitCode=0 Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.499693 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" event={"ID":"8a371e85-6173-4802-976d-7ee68bc9afdc","Type":"ContainerDied","Data":"6d9b217161f73edab65c8877eff569e0764fb365375ecd808afcce9e39dad3cf"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.502557 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" event={"ID":"48edf52b-d54b-4116-95d0-f8051704a4e3","Type":"ContainerStarted","Data":"6e53fce28553c1fe73b7eb72adf3c7ccae8b3748a0e1b6be34d99c16625b8fb2"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.506878 4913 generic.go:334] "Generic (PLEG): container finished" podID="026a670d-684f-4eb6-bda0-bd60294d3b95" containerID="3f15b55768d29cbd0cd6e4e57fa2a28cfc936a5621df640755559d170798146c" exitCode=0 Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.507150 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p4428" event={"ID":"026a670d-684f-4eb6-bda0-bd60294d3b95","Type":"ContainerDied","Data":"3f15b55768d29cbd0cd6e4e57fa2a28cfc936a5621df640755559d170798146c"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.507618 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" podStartSLOduration=122.50757341 podStartE2EDuration="2m2.50757341s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.505056322 +0000 UTC m=+141.301415995" watchObservedRunningTime="2026-01-21 06:37:31.50757341 +0000 UTC m=+141.303933073" Jan 21 
06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.512232 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cxnpf" event={"ID":"3e0ca241-c740-42a3-8fd9-970024126d64","Type":"ContainerStarted","Data":"6e48f4877d509b479160f57948aafc4e6c9110080d36e5b323e5f534f7bd06f1"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.515017 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" event={"ID":"56b4a4e7-bb42-437e-8dce-70cbc917c7a8","Type":"ContainerStarted","Data":"1351e3ba3a95a880f41cd7b45d112f72d3ee4396d850c5535fb541ec9cbc0a52"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.520756 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" event={"ID":"57e1cc03-984e-4486-8393-f80bc1aa94af","Type":"ContainerStarted","Data":"6d42f253ccc247388630850dc41781413f5e37fa53158d78d29109889b611ad3"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.530746 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" event={"ID":"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01","Type":"ContainerStarted","Data":"78d1e4adea34683cbee4433143a439394251aa02e7d283454de1ee49c197e873"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.535278 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" event={"ID":"465393d8-5293-482f-8f3b-91578b3ba57b","Type":"ContainerStarted","Data":"ff8925cdf094abcb97fa829780402ee78be44b7db21ff0ab7ba12b6da6c7c207"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.543127 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" 
event={"ID":"b1cbeb4c-0b76-4c39-ab17-18085750e8c2","Type":"ContainerStarted","Data":"e3d2a3542ecb6b7add310fec4503bf21d22df3479df9f75ed3067ac547f2d35a"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.551316 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.554065 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.052878658 +0000 UTC m=+141.849238331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.554172 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"a6af533befa224313080e1746568217d9ccfa07bdf1610e50e4ff9bb4f25d2d0"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.559135 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.563716 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" event={"ID":"70da4912-d52e-41a4-bf05-91f3f377d243","Type":"ContainerStarted","Data":"5feb0b07a4290e83834f632db08e39de35311551b17385edb36bae2ff82e9081"} Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.565939 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.065925127 +0000 UTC m=+141.862284800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.572079 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-cxnpf" podStartSLOduration=122.57205777 podStartE2EDuration="2m2.57205777s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.568396653 +0000 UTC m=+141.364756346" watchObservedRunningTime="2026-01-21 06:37:31.57205777 +0000 UTC m=+141.368417443" 
Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.621416 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" podStartSLOduration=122.621399717 podStartE2EDuration="2m2.621399717s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.620056601 +0000 UTC m=+141.416416274" watchObservedRunningTime="2026-01-21 06:37:31.621399717 +0000 UTC m=+141.417759390" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.624149 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" event={"ID":"bd2a9afe-21be-43e4-970d-03daff0713a1","Type":"ContainerStarted","Data":"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.625166 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.648019 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" event={"ID":"e70bbe19-3e5b-4629-b9bf-3c6fc8072836","Type":"ContainerStarted","Data":"6c7fa0c796a72c63c61994649ddd8916637ad7783de1094fb5742fda5843f240"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.648313 4913 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-b6p62 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.648367 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" 
podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.659817 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" podStartSLOduration=122.659802492 podStartE2EDuration="2m2.659802492s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.658242261 +0000 UTC m=+141.454601934" watchObservedRunningTime="2026-01-21 06:37:31.659802492 +0000 UTC m=+141.456162165" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.660647 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.661873 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.161857317 +0000 UTC m=+141.958216990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.715553 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bkrnj" event={"ID":"6c174a67-522b-4d34-ba66-905ff560f206","Type":"ContainerStarted","Data":"ab26a0d9b5d8c83ce276e743bb692cc8d05157e4c92550c01def6944b8b85c37"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.735763 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k6jdd" event={"ID":"08ac51dd-419d-4632-8a49-1972be301121","Type":"ContainerStarted","Data":"a9f412d7a4c7905dad1e375b9d243183aac250f3169ab9a45532c276b5d6635c"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.762389 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.762694 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.262681667 +0000 UTC m=+142.059041340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.765139 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" podStartSLOduration=122.765114492 podStartE2EDuration="2m2.765114492s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.695941757 +0000 UTC m=+141.492301430" watchObservedRunningTime="2026-01-21 06:37:31.765114492 +0000 UTC m=+141.561474165" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.765328 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-k6jdd" podStartSLOduration=122.765323568 podStartE2EDuration="2m2.765323568s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.764523737 +0000 UTC m=+141.560883420" watchObservedRunningTime="2026-01-21 06:37:31.765323568 +0000 UTC m=+141.561683241" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.781461 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" event={"ID":"fdb0c051-dafc-4d42-8c28-d28c049eb0f7","Type":"ContainerStarted","Data":"97e0ea62b166738e080bdd54067240747cef6f3b3ddedf944521f4558b0d795c"} Jan 21 06:37:31 crc 
kubenswrapper[4913]: I0121 06:37:31.782341 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.789326 4913 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cjqvz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.789378 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" podUID="fdb0c051-dafc-4d42-8c28-d28c049eb0f7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.791562 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k855s" event={"ID":"c5567f5a-5084-4cc6-b654-f1190dcc0064","Type":"ContainerStarted","Data":"6b09aeeb81a9f08eb8da4b70dfab1f0704f83466e82b190a59516cab9ae509cf"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.792522 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.818782 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.818838 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k855s" 
podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.819065 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" event={"ID":"82ebe95b-4e82-49aa-8693-52c0998ec7de","Type":"ContainerStarted","Data":"a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.819204 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" podStartSLOduration=122.819180685 podStartE2EDuration="2m2.819180685s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.812128097 +0000 UTC m=+141.608487770" watchObservedRunningTime="2026-01-21 06:37:31.819180685 +0000 UTC m=+141.615540358" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.819823 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.826744 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" event={"ID":"2dc94523-e315-4913-8ea8-ffa72274f5ab","Type":"ContainerStarted","Data":"42201769803aa0d96157ebef6e6d45ffa94fd026bf33a004dd48e731774c945b"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.826776 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" 
event={"ID":"2dc94523-e315-4913-8ea8-ffa72274f5ab","Type":"ContainerStarted","Data":"1c14de190265055f82a69f5a8751fa6dffc9f1fa70d05c6d9aeb4e53ef216116"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.830055 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" event={"ID":"6cdf7744-1629-46a4-b176-0fc75c149a95","Type":"ContainerStarted","Data":"ead66f604f28bf227ff5b5218886f29ecc158249c059f996e3036f13b05dcde3"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.830522 4913 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-tbgjj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.830558 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.834240 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k855s" podStartSLOduration=122.834229926 podStartE2EDuration="2m2.834229926s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.831852043 +0000 UTC m=+141.628211716" watchObservedRunningTime="2026-01-21 06:37:31.834229926 +0000 UTC m=+141.630589599" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.839737 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-5gjk2" event={"ID":"43177e9e-03e6-4864-843a-c753a096648f","Type":"ContainerStarted","Data":"14cbdff2dc7668af3ab2dd0a78497e678ba114f16da6b252dd8136c52470352d"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.847783 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" podStartSLOduration=122.847773068 podStartE2EDuration="2m2.847773068s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.847366747 +0000 UTC m=+141.643726420" watchObservedRunningTime="2026-01-21 06:37:31.847773068 +0000 UTC m=+141.644132741" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.848257 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" event={"ID":"3dc93a0c-f8e0-4c76-a032-6d3e34878168","Type":"ContainerStarted","Data":"0cdac816e352c9dd877b1acf1303f083f700337106d006450f7551e6d6ec9f52"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.849560 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" event={"ID":"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca","Type":"ContainerStarted","Data":"943ba7ad8e35481e8cfadba5c3d47bdfd8b4a22d56679068442de4499747eea4"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.850974 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" event={"ID":"f3e3e7a7-a59e-4d12-8499-38ad4a72832d","Type":"ContainerStarted","Data":"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.851446 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.853439 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" event={"ID":"6b1d8220-775c-47a7-a772-00eacc2f957c","Type":"ContainerStarted","Data":"c15e6ffe791c04b9dd5aeac8c356017e9c089ae3e068b0246741c046fd233d5a"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.855822 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" event={"ID":"f6ca48b3-019f-4481-b136-7d392b7073d8","Type":"ContainerStarted","Data":"1af0f87ecb503f03ba67e76f79f85af671500e787143c9d2de7a6f578e601f3f"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.856155 4913 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qjrx8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.856183 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.862896 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.864176 
4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.364160485 +0000 UTC m=+142.160520158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.875176 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" podStartSLOduration=122.875162199 podStartE2EDuration="2m2.875162199s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.861675899 +0000 UTC m=+141.658035572" watchObservedRunningTime="2026-01-21 06:37:31.875162199 +0000 UTC m=+141.671521872" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.876246 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j966n" event={"ID":"208b512b-e1b8-4df9-9ec2-0f30bea24a20","Type":"ContainerStarted","Data":"c529394340b2aafb749917221ef09774bc2626294ce56ea6a16c072658905eb4"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.876551 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" podStartSLOduration=122.876545676 
podStartE2EDuration="2m2.876545676s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.875739924 +0000 UTC m=+141.672099597" watchObservedRunningTime="2026-01-21 06:37:31.876545676 +0000 UTC m=+141.672905349" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.876848 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.883966 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" event={"ID":"6a4e8188-571a-4f41-8665-0565bf75f0d3","Type":"ContainerStarted","Data":"9aebd14d4dbaa35a1cb5311c634b85c28c58d3bd1a43070485c5f1caba588313"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.884540 4913 patch_prober.go:28] interesting pod/console-operator-58897d9998-j966n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.884571 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j966n" podUID="208b512b-e1b8-4df9-9ec2-0f30bea24a20" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.905319 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" 
event={"ID":"19f5b544-ffc3-43fb-b9b4-c319cffd63c5","Type":"ContainerStarted","Data":"128f1f950f4cd364364c5e1096100c21869bda7d7c62506adc50637ff1874b83"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.905770 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.908113 4913 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-98kwn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.911147 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" podUID="19f5b544-ffc3-43fb-b9b4-c319cffd63c5" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.912366 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" podStartSLOduration=122.912342191 podStartE2EDuration="2m2.912342191s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.893337984 +0000 UTC m=+141.689697657" watchObservedRunningTime="2026-01-21 06:37:31.912342191 +0000 UTC m=+141.708701864" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.913020 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" 
podStartSLOduration=122.913013039 podStartE2EDuration="2m2.913013039s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.908416026 +0000 UTC m=+141.704775699" watchObservedRunningTime="2026-01-21 06:37:31.913013039 +0000 UTC m=+141.709372712" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.914010 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" event={"ID":"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53","Type":"ContainerStarted","Data":"b77a6978b444a101442820a2957bd55881d6551d6fbd1b9392bc3aedc3033a78"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.921460 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kqctf" event={"ID":"08d20980-2196-4efa-952e-defded465fb4","Type":"ContainerStarted","Data":"a97f5604cb9263066b45a4b81290fb45943d0453d76ce7945059e6e0402bfd66"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.921493 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kqctf" event={"ID":"08d20980-2196-4efa-952e-defded465fb4","Type":"ContainerStarted","Data":"81d3773cd493ed26d7e4d160f443c300fba060350d54806c455daa0987a0f26c"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.924112 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" event={"ID":"238fcbbb-ece2-4108-b4be-79ed872e541d","Type":"ContainerStarted","Data":"b6e85abfb4417f892d9c8bb170a26b7a54c694d381352f853bac71caddcfca8a"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.924153 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" 
event={"ID":"238fcbbb-ece2-4108-b4be-79ed872e541d","Type":"ContainerStarted","Data":"45797279a554551aca6c49898c336c017d121c2b2772bf3ded612f032be7c08f"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.932646 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5gjk2" podStartSLOduration=6.932630883 podStartE2EDuration="6.932630883s" podCreationTimestamp="2026-01-21 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.929790967 +0000 UTC m=+141.726150640" watchObservedRunningTime="2026-01-21 06:37:31.932630883 +0000 UTC m=+141.728990556" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.956462 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" podStartSLOduration=122.956444708 podStartE2EDuration="2m2.956444708s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.953635953 +0000 UTC m=+141.749995626" watchObservedRunningTime="2026-01-21 06:37:31.956444708 +0000 UTC m=+141.752804381" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.965421 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.968389 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.468374286 +0000 UTC m=+142.264734079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.978479 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" podStartSLOduration=122.978458215 podStartE2EDuration="2m2.978458215s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.977640974 +0000 UTC m=+141.774000667" watchObservedRunningTime="2026-01-21 06:37:31.978458215 +0000 UTC m=+141.774817888" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.014760 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" podStartSLOduration=123.014721523 podStartE2EDuration="2m3.014721523s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.012109163 +0000 UTC m=+141.808468836" watchObservedRunningTime="2026-01-21 06:37:32.014721523 +0000 UTC m=+141.811081196" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.059766 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" podStartSLOduration=123.059737704 podStartE2EDuration="2m3.059737704s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.035309952 +0000 UTC m=+141.831669625" watchObservedRunningTime="2026-01-21 06:37:32.059737704 +0000 UTC m=+141.856097367" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.059915 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kqctf" podStartSLOduration=7.059910799 podStartE2EDuration="7.059910799s" podCreationTimestamp="2026-01-21 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.057654849 +0000 UTC m=+141.854014522" watchObservedRunningTime="2026-01-21 06:37:32.059910799 +0000 UTC m=+141.856270472" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.074062 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.074320 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.574306003 +0000 UTC m=+142.370665676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.093410 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" podStartSLOduration=123.093392152 podStartE2EDuration="2m3.093392152s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.075183806 +0000 UTC m=+141.871543479" watchObservedRunningTime="2026-01-21 06:37:32.093392152 +0000 UTC m=+141.889751825" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.110165 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-j966n" podStartSLOduration=123.110136889 podStartE2EDuration="2m3.110136889s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.108218148 +0000 UTC m=+141.904577821" watchObservedRunningTime="2026-01-21 06:37:32.110136889 +0000 UTC m=+141.906496562" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.176250 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: 
\"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.176699 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.676687525 +0000 UTC m=+142.473047198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.273938 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.277466 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.277882 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.777855874 +0000 UTC m=+142.574215567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.300847 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:32 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:32 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:32 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.301108 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.379392 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.379784 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:32.879773364 +0000 UTC m=+142.676133037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.481306 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.481611 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.981575431 +0000 UTC m=+142.777935094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.481667 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.482015 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.982002162 +0000 UTC m=+142.778361825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.583029 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.583499 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.083484881 +0000 UTC m=+142.879844554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.684347 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.684672 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.184661331 +0000 UTC m=+142.981021004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.785646 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.785805 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.285782949 +0000 UTC m=+143.082142622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.786749 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.787058 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.287050313 +0000 UTC m=+143.083409976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.887828 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.888231 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.388217592 +0000 UTC m=+143.184577265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.937954 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" event={"ID":"c207fbab-618a-4c01-8450-cb7ffad0f50d","Type":"ContainerStarted","Data":"45d0dcabf7309955eec07c984211b0153a93c71d4bd539e96746cc726f45ee6e"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.941563 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"c508ed471b09484ed8523f8e356a0f549ffa702c91c1727787bcba35924470f5"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.948900 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" event={"ID":"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53","Type":"ContainerStarted","Data":"79935a8189f00e1c2abf01d3af7900b205fa684a847f59a814beb4bf0fa56ab1"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.964556 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5gjk2" event={"ID":"43177e9e-03e6-4864-843a-c753a096648f","Type":"ContainerStarted","Data":"d8036201069ab2b54582c6855a8a2f6580544109879d60a237d24962a07d4390"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.966584 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" 
podStartSLOduration=123.966569253 podStartE2EDuration="2m3.966569253s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.964993921 +0000 UTC m=+142.761353594" watchObservedRunningTime="2026-01-21 06:37:32.966569253 +0000 UTC m=+142.762928926" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.983094 4913 generic.go:334] "Generic (PLEG): container finished" podID="0ee14186-f787-47f1-8537-8cb2210ac28c" containerID="f3fe3d7736d8b298c6f2f4d11775d277425b2a3b0b09aa001967af6ef48fa51a" exitCode=0 Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.983376 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" event={"ID":"0ee14186-f787-47f1-8537-8cb2210ac28c","Type":"ContainerDied","Data":"f3fe3d7736d8b298c6f2f4d11775d277425b2a3b0b09aa001967af6ef48fa51a"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.989442 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.990343 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.490328757 +0000 UTC m=+143.286688430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.033496 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" podStartSLOduration=124.033479829 podStartE2EDuration="2m4.033479829s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.007547506 +0000 UTC m=+142.803907179" watchObservedRunningTime="2026-01-21 06:37:33.033479829 +0000 UTC m=+142.829839502" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.041408 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" event={"ID":"e70bbe19-3e5b-4629-b9bf-3c6fc8072836","Type":"ContainerStarted","Data":"6743d0850e8e1324f70836fa62f6cf39fb99dbe70ec9e8f52f3477a56e851033"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.041446 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" event={"ID":"e70bbe19-3e5b-4629-b9bf-3c6fc8072836","Type":"ContainerStarted","Data":"a8b52555d9d79ffc19ea69145002ee5080bb823826ac273bd77aefd3171f5aaa"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.088539 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" podStartSLOduration=124.088522587 
podStartE2EDuration="2m4.088522587s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.087027977 +0000 UTC m=+142.883387650" watchObservedRunningTime="2026-01-21 06:37:33.088522587 +0000 UTC m=+142.884882260" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.090261 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.091519 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.591499967 +0000 UTC m=+143.387859640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.093873 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" event={"ID":"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01","Type":"ContainerStarted","Data":"ad2afa26dfc05cd49f90b9eb7883d9eb727e5bb35328d31056e07b10aa772523"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.123921 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" podStartSLOduration=125.123888861 podStartE2EDuration="2m5.123888861s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.123884631 +0000 UTC m=+142.920244304" watchObservedRunningTime="2026-01-21 06:37:33.123888861 +0000 UTC m=+142.920248544" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.140830 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" event={"ID":"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32","Type":"ContainerStarted","Data":"d02d718c837a8f7099917d6936060ea5ae138fe7c06ab78e09e2967d3c09c62e"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.149134 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" 
event={"ID":"3f92f014-e88f-4e07-8f20-892e47c5de80","Type":"ContainerStarted","Data":"05dc613cfe664b4e4cdc251c60c45c9097c81d35531a511a12c398f60f81b38e"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.149348 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" event={"ID":"3f92f014-e88f-4e07-8f20-892e47c5de80","Type":"ContainerStarted","Data":"3cae0f6b28083d9730399cd3e1b262b23482281f8d43eb8e6bbc1e03316e9f99"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.150044 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.174431 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" podStartSLOduration=124.17441745 podStartE2EDuration="2m4.17441745s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.173058604 +0000 UTC m=+142.969418287" watchObservedRunningTime="2026-01-21 06:37:33.17441745 +0000 UTC m=+142.970777113" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.190896 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bkrnj" event={"ID":"6c174a67-522b-4d34-ba66-905ff560f206","Type":"ContainerStarted","Data":"66ef69324bb27af3b448434137010eea4e696b5b09913e45883abdf29604d80e"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.191203 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.191886 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.193036 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.693023696 +0000 UTC m=+143.489383369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.202569 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" podStartSLOduration=124.202553421 podStartE2EDuration="2m4.202553421s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.200260979 +0000 UTC m=+142.996620652" watchObservedRunningTime="2026-01-21 06:37:33.202553421 +0000 UTC m=+142.998913094" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.204578 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" 
event={"ID":"57e1cc03-984e-4486-8393-f80bc1aa94af","Type":"ContainerStarted","Data":"f75d6d7d12b81f3561db9ca46abfdeb053f04afcb38d6a670b580f528e71a92d"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.220582 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" event={"ID":"48edf52b-d54b-4116-95d0-f8051704a4e3","Type":"ContainerStarted","Data":"e7514f43c271b4b19d20438633b628a8cc842725b49470e5f5aa9cb3fafe2297"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.226945 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bkrnj" podStartSLOduration=8.226931571 podStartE2EDuration="8.226931571s" podCreationTimestamp="2026-01-21 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.225938094 +0000 UTC m=+143.022297757" watchObservedRunningTime="2026-01-21 06:37:33.226931571 +0000 UTC m=+143.023291244" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.244246 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" event={"ID":"8a371e85-6173-4802-976d-7ee68bc9afdc","Type":"ContainerStarted","Data":"6f5961288fb63653b054ead61f727c1e9dd9cf60e59a3b39f0bcb04f8c7b408f"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.244513 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.261800 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" event={"ID":"465393d8-5293-482f-8f3b-91578b3ba57b","Type":"ContainerStarted","Data":"e2878df36263cb8bb182118daa50ab3d3c75f9571223b72f6b37e78c27738677"} Jan 21 06:37:33 crc kubenswrapper[4913]: 
I0121 06:37:33.261848 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" event={"ID":"465393d8-5293-482f-8f3b-91578b3ba57b","Type":"ContainerStarted","Data":"b9785da6eaa920126af79b7c788237c0592086a5842665c63b3e7789ce9793cd"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.276738 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:33 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:33 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:33 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.276836 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.278208 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" event={"ID":"0a7775c5-ca46-4ab1-b4e1-96c818301059","Type":"ContainerStarted","Data":"738d06bd8d6f5c4238e8c2c76b50d3599fa637b42354f2fb64db79311e1e4868"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.285359 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p4428" event={"ID":"026a670d-684f-4eb6-bda0-bd60294d3b95","Type":"ContainerStarted","Data":"dde2f74ae22d10dbe900066233ebccc8b5fb81dc49f3502b013b8f80fb564388"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.285402 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p4428" 
event={"ID":"026a670d-684f-4eb6-bda0-bd60294d3b95","Type":"ContainerStarted","Data":"9a8b45479c822246c3eaf9eef7a23feeb6fa1b2603e68390e7e90590fcc469e0"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.293768 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.294174 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" podStartSLOduration=124.294159425 podStartE2EDuration="2m4.294159425s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.261920654 +0000 UTC m=+143.058280327" watchObservedRunningTime="2026-01-21 06:37:33.294159425 +0000 UTC m=+143.090519098" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.295063 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.795041909 +0000 UTC m=+143.591401582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.342864 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" event={"ID":"f6ca48b3-019f-4481-b136-7d392b7073d8","Type":"ContainerStarted","Data":"cc46aa6d0c5e46d72ccbfa48a20fefeb1f52df3122972a22d9d4ce26dd03a630"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.344199 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" podStartSLOduration=124.34418575 podStartE2EDuration="2m4.34418575s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.294395521 +0000 UTC m=+143.090755194" watchObservedRunningTime="2026-01-21 06:37:33.34418575 +0000 UTC m=+143.140545413" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.344498 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" podStartSLOduration=124.344491838 podStartE2EDuration="2m4.344491838s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.342550367 +0000 UTC m=+143.138910030" watchObservedRunningTime="2026-01-21 06:37:33.344491838 +0000 UTC 
m=+143.140851511" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.365730 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" event={"ID":"b1cbeb4c-0b76-4c39-ab17-18085750e8c2","Type":"ContainerStarted","Data":"157bb01bcd8a30b632547d11c3c7863ac2466c9b886b1eeafd1b770eaeaaa8c5"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.366871 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.366910 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k855s" podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.367370 4913 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qjrx8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.367392 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.388874 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.402302 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.406165 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" podStartSLOduration=124.406148643 podStartE2EDuration="2m4.406148643s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.365429447 +0000 UTC m=+143.161789120" watchObservedRunningTime="2026-01-21 06:37:33.406148643 +0000 UTC m=+143.202508316" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.406985 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.409221 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.411216 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.435799 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.935783844 +0000 UTC m=+143.732143517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.443291 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-p4428" podStartSLOduration=124.443277734 podStartE2EDuration="2m4.443277734s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.442411071 +0000 UTC m=+143.238770744" watchObservedRunningTime="2026-01-21 06:37:33.443277734 +0000 UTC m=+143.239637407" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.474644 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.512311 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.514237 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.014216737 +0000 UTC m=+143.810576410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.563676 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" podStartSLOduration=124.563658487 podStartE2EDuration="2m4.563658487s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.537074938 +0000 UTC m=+143.333434611" watchObservedRunningTime="2026-01-21 06:37:33.563658487 +0000 UTC m=+143.360018160" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.594984 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.616263 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.616536 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.116524187 +0000 UTC m=+143.912883860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.665162 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" podStartSLOduration=124.665145885 podStartE2EDuration="2m4.665145885s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.606331725 +0000 UTC m=+143.402691398" watchObservedRunningTime="2026-01-21 06:37:33.665145885 +0000 UTC m=+143.461505558" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.698209 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" podStartSLOduration=124.698196597 podStartE2EDuration="2m4.698196597s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.696030619 +0000 UTC m=+143.492390292" watchObservedRunningTime="2026-01-21 06:37:33.698196597 +0000 UTC m=+143.494556270" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.719329 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.719462 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.219439654 +0000 UTC m=+144.015799317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.719540 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.719888 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.219881555 +0000 UTC m=+144.016241228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.800912 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mvlq6"] Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.801800 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.823830 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.824050 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.824095 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:34.324069326 +0000 UTC m=+144.120428999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.824157 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.824225 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.824250 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.824693 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.324687312 +0000 UTC m=+144.121046985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.837102 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.864707 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"] Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.924805 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.924987 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.925140 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.425113042 +0000 UTC m=+144.221472715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.925324 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.925737 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.925806 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.926055 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.930556 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" podStartSLOduration=124.930539337 podStartE2EDuration="2m4.930539337s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.900919506 +0000 UTC m=+143.697279179" watchObservedRunningTime="2026-01-21 06:37:33.930539337 +0000 UTC m=+143.726899010" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.960483 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.963453 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"] Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.972314 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.980104 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.993579 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.036454 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.036497 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.036522 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.036572 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.036904 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.536889855 +0000 UTC m=+144.333249528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.118860 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.139545 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.139746 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.139778 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.139836 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.140301 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " 
pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.140665 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.640651354 +0000 UTC m=+144.437011027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.140870 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.158874 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rm75l"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.159784 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.169856 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.177623 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rm75l"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.194111 4913 csr.go:261] certificate signing request csr-jv8rl is approved, waiting to be issued Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.203827 4913 csr.go:257] certificate signing request csr-jv8rl is issued Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.251458 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.251490 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.251534 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.251561 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.251841 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.75182961 +0000 UTC m=+144.548189283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.278485 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:34 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:34 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:34 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.278529 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.313688 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.352455 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.352947 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.352967 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.353011 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.353691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " 
pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.353727 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.853697509 +0000 UTC m=+144.650057182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.354158 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.369674 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fszdj"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.371257 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.407176 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.411631 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.449522 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" event={"ID":"0ee14186-f787-47f1-8537-8cb2210ac28c","Type":"ContainerStarted","Data":"bb80fdbcab8c188c037de2c17ba9f3d47ee1a8f40de7d45b21922035248e39b5"} Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.457296 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.457326 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.457349 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.457415 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.458250 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.958240978 +0000 UTC m=+144.754600651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.499568 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"5575122f01690d43e5471873512af32675a183059f2884ee3b59f362a09960fe"} Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.500031 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.500064 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k855s" podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.518166 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.520532 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rm75l"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.548113 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" podStartSLOduration=125.548095516 podStartE2EDuration="2m5.548095516s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:34.478430067 +0000 UTC m=+144.274789740" watchObservedRunningTime="2026-01-21 06:37:34.548095516 +0000 UTC m=+144.344455189"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.564300 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.564388 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.06436907 +0000 UTC m=+144.860728753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.565200 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.565652 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.566422 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.566510 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj"
Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.569984 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.06997048 +0000 UTC m=+144.866330143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.580033 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.587216 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.669064 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.669490 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.169475605 +0000 UTC m=+144.965835268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.669489 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.711827 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fszdj"
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.771672 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.772125 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.272114464 +0000 UTC m=+145.068474137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.803757 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"]
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.880095 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.880233 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.380206129 +0000 UTC m=+145.176565802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.880363 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.880670 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.3806579 +0000 UTC m=+145.177017573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.936836 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"]
Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.981164 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.981486 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.481472011 +0000 UTC m=+145.277831684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.044996 4913 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.082933 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.083292 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.583279798 +0000 UTC m=+145.379639471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.163381 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"]
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.186553 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.186864 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.686849781 +0000 UTC m=+145.483209454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.205447 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 06:32:34 +0000 UTC, rotation deadline is 2026-12-08 09:42:02.361769364 +0000 UTC
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.205469 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7707h4m27.156302006s for next certificate rotation
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.275143 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 06:37:35 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld
Jan 21 06:37:35 crc kubenswrapper[4913]: [+]process-running ok
Jan 21 06:37:35 crc kubenswrapper[4913]: healthz check failed
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.275194 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.285519 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rm75l"]
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.290432 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.290884 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.790869317 +0000 UTC m=+145.587229000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.394181 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.394408 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.894372019 +0000 UTC m=+145.690731692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.495553 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.495973 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.99595201 +0000 UTC m=+145.792311693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.504701 4913 generic.go:334] "Generic (PLEG): container finished" podID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerID="1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148" exitCode=0
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.504755 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerDied","Data":"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148"}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.504802 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerStarted","Data":"56ab7cdf728ac690777654ae4eaf5e6fc42307f0dee5ce8045bb907e80f0f634"}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.506035 4913 generic.go:334] "Generic (PLEG): container finished" podID="f2b20a33-f426-426f-9657-3d11d403629f" containerID="6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9" exitCode=0
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.506102 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerDied","Data":"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9"}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.506134 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerStarted","Data":"6a8e2ac63fb84aa47578d17a8198d55bdad0c3fb7a2896b7a8bd7e3526aa7149"}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.506317 4913 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.507716 4913 generic.go:334] "Generic (PLEG): container finished" podID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerID="c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635" exitCode=0
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.507768 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerDied","Data":"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635"}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.507793 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerStarted","Data":"eb2a4164400078d5e47383eb8825b8a46cafb4407ff81311bae02795bf3351aa"}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.513334 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"dbf01b660dcba306070351ff37808ae03dc308207dd1d10705edfc214b219fbb"}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.517004 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerStarted","Data":"2b2da556d8d5ceb79d9f0ad50be41dd604bef4e604d018fead743630456fc287"}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.597325 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.597408 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.097393877 +0000 UTC m=+145.893753540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.599548 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.599778 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.09976329 +0000 UTC m=+145.896122963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.696378 4913 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T06:37:35.045015787Z","Handler":null,"Name":""}
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.699276 4913 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.699315 4913 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.700973 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.739566 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.746478 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"]
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.747837 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.750735 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.757519 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"]
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.802847 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.802906 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.802962 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.802983 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.805053 4913 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.805091 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.833558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.903790 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.903897 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.903927 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.904250 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.904683 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.927155 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.000228 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.069765 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.146065 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"]
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.147460 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.160948 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"]
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.208439 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.208503 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.209949 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.274972 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 06:37:36 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld
Jan 21 06:37:36 crc kubenswrapper[4913]: [+]process-running ok
Jan 21 06:37:36 crc kubenswrapper[4913]: healthz check failed
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.275097 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.311733 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.311786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.311884 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.313738 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.315395 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.328498 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.343297 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"]
Jan 21 06:37:36 crc kubenswrapper[4913]: W0121 06:37:36.350200 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2255f06f_74ad_4308_9575_c04f8c24d4d5.slice/crio-9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883 WatchSource:0}: Error finding container 9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883: Status 404 returned error can't find the container with id 9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.415109 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.453071 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"]
Jan 
21 06:37:36 crc kubenswrapper[4913]: W0121 06:37:36.458071 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf46fd64f_46cb_4464_8f26_6df55bf77ba1.slice/crio-608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5 WatchSource:0}: Error finding container 608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5: Status 404 returned error can't find the container with id 608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5 Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.465372 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.510480 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.511216 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.512814 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.513129 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.514939 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.516232 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.516273 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.516335 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:36 crc 
kubenswrapper[4913]: I0121 06:37:36.527058 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.527464 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.532112 4913 generic.go:334] "Generic (PLEG): container finished" podID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerID="ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea" exitCode=0 Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.535229 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.542689 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.542805 4913 generic.go:334] "Generic (PLEG): container finished" podID="0a7775c5-ca46-4ab1-b4e1-96c818301059" 
containerID="738d06bd8d6f5c4238e8c2c76b50d3599fa637b42354f2fb64db79311e1e4868" exitCode=0 Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.543262 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerDied","Data":"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.543292 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerStarted","Data":"9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.543308 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" event={"ID":"f46fd64f-46cb-4464-8f26-6df55bf77ba1","Type":"ContainerStarted","Data":"608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.543320 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" event={"ID":"0a7775c5-ca46-4ab1-b4e1-96c818301059","Type":"ContainerDied","Data":"738d06bd8d6f5c4238e8c2c76b50d3599fa637b42354f2fb64db79311e1e4868"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.551630 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.554256 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"0e0ca49b65d42705efaa888df39ebeec9faba1fa3577e89244b08af4547e4a1e"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.569880 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.586671 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" podStartSLOduration=11.586654236 podStartE2EDuration="11.586654236s" podCreationTimestamp="2026-01-21 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:36.579791752 +0000 UTC m=+146.376151445" watchObservedRunningTime="2026-01-21 06:37:36.586654236 +0000 UTC m=+146.383013909" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.601513 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.628555 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.628705 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.646459 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.733820 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.733917 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.734051 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.750522 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: W0121 06:37:36.781929 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-0820aba1bec7a1a7ead920ea5c4485fd89a90489aebee47413206def4337584a WatchSource:0}: Error finding container 0820aba1bec7a1a7ead920ea5c4485fd89a90489aebee47413206def4337584a: Status 404 returned error can't find the container with id 0820aba1bec7a1a7ead920ea5c4485fd89a90489aebee47413206def4337584a Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.828465 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.834761 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.839428 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.882129 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.945243 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.031473 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.048759 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 06:37:37 crc kubenswrapper[4913]: W0121 06:37:37.108044 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda20e6104_9ef6_4f62_990b_e0d660e5b5c4.slice/crio-f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a WatchSource:0}: Error finding container f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a: Status 404 returned error can't find the container with id f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a Jan 21 06:37:37 crc kubenswrapper[4913]: W0121 06:37:37.112766 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7521412f_3363_4617_9740_9dd9124df38e.slice/crio-4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015 WatchSource:0}: Error finding container 4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015: Status 404 returned error can't find the container with id 4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.148254 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.149346 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.152777 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.156558 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:37:37 crc kubenswrapper[4913]: W0121 06:37:37.160357 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-368f250ed7a2c79a768190161ff2aa22bfcef5675332f95b7c6ca3f52816cd32 WatchSource:0}: Error finding container 368f250ed7a2c79a768190161ff2aa22bfcef5675332f95b7c6ca3f52816cd32: Status 404 returned error can't find the container with id 368f250ed7a2c79a768190161ff2aa22bfcef5675332f95b7c6ca3f52816cd32 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.244584 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.244695 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.244715 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.278796 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:37 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:37 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:37 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.279087 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.279254 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wfcsc"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.348111 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.347011 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 
crc kubenswrapper[4913]: I0121 06:37:37.348772 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.348804 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.349707 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.368814 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: W0121 06:37:37.396181 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-d056494a249f4a754c5a2de915bb434c814508d90542831dc147ccf4340518d1 WatchSource:0}: Error finding container d056494a249f4a754c5a2de915bb434c814508d90542831dc147ccf4340518d1: Status 404 returned error can't find the container 
with id d056494a249f4a754c5a2de915bb434c814508d90542831dc147ccf4340518d1 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.435644 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.435719 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.442185 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.472647 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.488865 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.488901 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.491009 4913 patch_prober.go:28] interesting pod/console-f9d7485db-k6jdd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.491049 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-k6jdd" podUID="08ac51dd-419d-4632-8a49-1972be301121" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.552364 4913 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.554192 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.557292 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.581420 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" event={"ID":"f46fd64f-46cb-4464-8f26-6df55bf77ba1","Type":"ContainerStarted","Data":"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.582169 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.596010 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a20e6104-9ef6-4f62-990b-e0d660e5b5c4","Type":"ContainerStarted","Data":"f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.605665 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" podStartSLOduration=128.605569495 podStartE2EDuration="2m8.605569495s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:37.599146914 +0000 UTC m=+147.395506587" watchObservedRunningTime="2026-01-21 06:37:37.605569495 +0000 UTC m=+147.401929168" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.605792 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d056494a249f4a754c5a2de915bb434c814508d90542831dc147ccf4340518d1"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.611180 4913 generic.go:334] "Generic (PLEG): container finished" podID="7521412f-3363-4617-9740-9dd9124df38e" containerID="93e15fb5b03e79a08467b762e78a24c070dcc8c24e8f33b03e16ab6662aedb40" exitCode=0 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.611256 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerDied","Data":"93e15fb5b03e79a08467b762e78a24c070dcc8c24e8f33b03e16ab6662aedb40"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.611281 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerStarted","Data":"4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.615431 4913 generic.go:334] "Generic (PLEG): container finished" podID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerID="afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7" exitCode=0 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.615498 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerDied","Data":"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.623428 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.623465 4913 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.627064 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" event={"ID":"60ed8982-ee20-4330-861f-61509c39bbe7","Type":"ContainerStarted","Data":"1888ea49e90b59ad215144de24326b4ef5ee1d769608893bb4af01982a17bb80"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.631036 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b0dbfaddaa6ec33143142e8918c05587640ff5e2a874ef83645d5bb9f7145e8f"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.631100 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0820aba1bec7a1a7ead920ea5c4485fd89a90489aebee47413206def4337584a"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.640656 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"61765a860322b4c086d638b92e091fc73dae3b3b3faacac51fdba49fb79bff32"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.640772 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.640787 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"368f250ed7a2c79a768190161ff2aa22bfcef5675332f95b7c6ca3f52816cd32"} Jan 21 06:37:37 crc 
kubenswrapper[4913]: I0121 06:37:37.641150 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.652726 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.654521 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.654564 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.654705 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.759146 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" 
Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.759303 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.759362 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.760736 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.760948 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.809271 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.809322 4913 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-k855s" podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.809333 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.809393 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k855s" podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.815033 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.913912 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.104389 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.270024 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.281907 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:38 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:38 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:38 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.281951 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.296443 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.319303 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.319627 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.375067 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") pod \"0a7775c5-ca46-4ab1-b4e1-96c818301059\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.375136 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") pod \"0a7775c5-ca46-4ab1-b4e1-96c818301059\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.375234 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97qxg\" (UniqueName: \"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") pod \"0a7775c5-ca46-4ab1-b4e1-96c818301059\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.376769 4913 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a7775c5-ca46-4ab1-b4e1-96c818301059" (UID: "0a7775c5-ca46-4ab1-b4e1-96c818301059"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.382437 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a7775c5-ca46-4ab1-b4e1-96c818301059" (UID: "0a7775c5-ca46-4ab1-b4e1-96c818301059"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.386697 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg" (OuterVolumeSpecName: "kube-api-access-97qxg") pod "0a7775c5-ca46-4ab1-b4e1-96c818301059" (UID: "0a7775c5-ca46-4ab1-b4e1-96c818301059"). InnerVolumeSpecName "kube-api-access-97qxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.476885 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97qxg\" (UniqueName: \"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.476929 4913 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.476938 4913 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.606373 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.646421 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" event={"ID":"60ed8982-ee20-4330-861f-61509c39bbe7","Type":"ContainerStarted","Data":"be846a9ebb7209b98d835a31a04774bb1797438fc70428969ef1f316fb98ba32"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.646472 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" event={"ID":"60ed8982-ee20-4330-861f-61509c39bbe7","Type":"ContainerStarted","Data":"6bde92ee8efa1aa43a2ea494fcc9e1c3e7d5da9354502eac8a4ec5892221663e"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.648683 4913 generic.go:334] "Generic (PLEG): container finished" podID="d976374c-9adc-426a-9593-43e617e72281" containerID="a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9" exitCode=0 Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.648728 
4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerDied","Data":"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.648743 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerStarted","Data":"cd166342c5c7d3828aa55b99bbc4cb3c9d3bdf94c3c49466b8128a155f8f51f9"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.651166 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" event={"ID":"0a7775c5-ca46-4ab1-b4e1-96c818301059","Type":"ContainerDied","Data":"8af3a3fe3dfde5bcb547f586da849f79867bd334c534af99562358101ad4a451"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.651198 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af3a3fe3dfde5bcb547f586da849f79867bd334c534af99562358101ad4a451" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.651235 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.657443 4913 generic.go:334] "Generic (PLEG): container finished" podID="a20e6104-9ef6-4f62-990b-e0d660e5b5c4" containerID="afa1950ea9d1488c78126bf10cbd77be0b61730b70277423566008c4a2b19495" exitCode=0 Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.657521 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a20e6104-9ef6-4f62-990b-e0d660e5b5c4","Type":"ContainerDied","Data":"afa1950ea9d1488c78126bf10cbd77be0b61730b70277423566008c4a2b19495"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.661047 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wfcsc" podStartSLOduration=130.66102891 podStartE2EDuration="2m10.66102891s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:38.659218902 +0000 UTC m=+148.455578575" watchObservedRunningTime="2026-01-21 06:37:38.66102891 +0000 UTC m=+148.457388583" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.672090 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"da0bb8f08d9b375c9f1e359a284b3459ccad7175074392fc51a7e212cf1c22e7"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.676480 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerStarted","Data":"8f5064bd05054c2b02632229ded6fedcb4045b72cf1e85d3555133283a45b0c3"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.685569 4913 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:39 crc kubenswrapper[4913]: I0121 06:37:39.275173 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:39 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:39 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:39 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:39 crc kubenswrapper[4913]: I0121 06:37:39.275472 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:39 crc kubenswrapper[4913]: I0121 06:37:39.695673 4913 generic.go:334] "Generic (PLEG): container finished" podID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerID="4536887c323df45fbc4166635e0604a06736c4d0fb3091dd1489a3822a0f1cf4" exitCode=0 Jan 21 06:37:39 crc kubenswrapper[4913]: I0121 06:37:39.695793 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerDied","Data":"4536887c323df45fbc4166635e0604a06736c4d0fb3091dd1489a3822a0f1cf4"} Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.080825 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.222994 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") pod \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.223102 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a20e6104-9ef6-4f62-990b-e0d660e5b5c4" (UID: "a20e6104-9ef6-4f62-990b-e0d660e5b5c4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.223143 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") pod \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.223337 4913 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.227876 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a20e6104-9ef6-4f62-990b-e0d660e5b5c4" (UID: "a20e6104-9ef6-4f62-990b-e0d660e5b5c4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.272412 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:40 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:40 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:40 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.272469 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.324179 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.782968 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a20e6104-9ef6-4f62-990b-e0d660e5b5c4","Type":"ContainerDied","Data":"f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a"} Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.783002 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.783026 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:41 crc kubenswrapper[4913]: I0121 06:37:41.271352 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:41 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:41 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:41 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:41 crc kubenswrapper[4913]: I0121 06:37:41.271418 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.273277 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:42 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:42 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:42 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.273347 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.519208 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 06:37:42 crc kubenswrapper[4913]: E0121 06:37:42.520040 4913 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7775c5-ca46-4ab1-b4e1-96c818301059" containerName="collect-profiles" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520061 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7775c5-ca46-4ab1-b4e1-96c818301059" containerName="collect-profiles" Jan 21 06:37:42 crc kubenswrapper[4913]: E0121 06:37:42.520079 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20e6104-9ef6-4f62-990b-e0d660e5b5c4" containerName="pruner" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520086 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20e6104-9ef6-4f62-990b-e0d660e5b5c4" containerName="pruner" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520182 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7775c5-ca46-4ab1-b4e1-96c818301059" containerName="collect-profiles" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520196 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20e6104-9ef6-4f62-990b-e0d660e5b5c4" containerName="pruner" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520645 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.522410 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.522561 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.542960 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.655672 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.655718 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.756682 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.756786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.756919 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.773691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.843080 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.844701 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.038898 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.199041 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 06:37:43 crc kubenswrapper[4913]: W0121 06:37:43.266497 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1319231d_e415_4c56_a0e2_7584edddc7e4.slice/crio-c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482 WatchSource:0}: Error finding container c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482: Status 404 returned error can't find the container with id c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482 Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.274777 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.277787 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.828014 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1319231d-e415-4c56-a0e2-7584edddc7e4","Type":"ContainerStarted","Data":"c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482"} Jan 21 06:37:44 crc kubenswrapper[4913]: I0121 06:37:44.834675 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1319231d-e415-4c56-a0e2-7584edddc7e4","Type":"ContainerStarted","Data":"f373a6052d782241dd48fe4cf32a660a6c768adbd887f17e18f4463268526fd8"} Jan 21 06:37:44 crc 
kubenswrapper[4913]: I0121 06:37:44.852389 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.852373817 podStartE2EDuration="2.852373817s" podCreationTimestamp="2026-01-21 06:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:44.84910836 +0000 UTC m=+154.645468033" watchObservedRunningTime="2026-01-21 06:37:44.852373817 +0000 UTC m=+154.648733490" Jan 21 06:37:45 crc kubenswrapper[4913]: I0121 06:37:45.842970 4913 generic.go:334] "Generic (PLEG): container finished" podID="1319231d-e415-4c56-a0e2-7584edddc7e4" containerID="f373a6052d782241dd48fe4cf32a660a6c768adbd887f17e18f4463268526fd8" exitCode=0 Jan 21 06:37:45 crc kubenswrapper[4913]: I0121 06:37:45.843107 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1319231d-e415-4c56-a0e2-7584edddc7e4","Type":"ContainerDied","Data":"f373a6052d782241dd48fe4cf32a660a6c768adbd887f17e18f4463268526fd8"} Jan 21 06:37:47 crc kubenswrapper[4913]: I0121 06:37:47.492026 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:47 crc kubenswrapper[4913]: I0121 06:37:47.495649 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:47 crc kubenswrapper[4913]: I0121 06:37:47.811958 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.872505 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"1319231d-e415-4c56-a0e2-7584edddc7e4","Type":"ContainerDied","Data":"c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482"} Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.872980 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482" Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.893777 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.997295 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") pod \"1319231d-e415-4c56-a0e2-7584edddc7e4\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.997369 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") pod \"1319231d-e415-4c56-a0e2-7584edddc7e4\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.997424 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1319231d-e415-4c56-a0e2-7584edddc7e4" (UID: "1319231d-e415-4c56-a0e2-7584edddc7e4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.997746 4913 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:51 crc kubenswrapper[4913]: I0121 06:37:51.003670 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1319231d-e415-4c56-a0e2-7584edddc7e4" (UID: "1319231d-e415-4c56-a0e2-7584edddc7e4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:37:51 crc kubenswrapper[4913]: I0121 06:37:51.098868 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:51 crc kubenswrapper[4913]: I0121 06:37:51.878077 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:51 crc kubenswrapper[4913]: E0121 06:37:51.956322 4913 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod1319231d_e415_4c56_a0e2_7584edddc7e4.slice\": RecentStats: unable to find data in memory cache]" Jan 21 06:37:56 crc kubenswrapper[4913]: I0121 06:37:56.008719 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:38:08 crc kubenswrapper[4913]: I0121 06:38:08.243345 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:38:08 crc kubenswrapper[4913]: I0121 06:38:08.319702 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:38:08 crc kubenswrapper[4913]: I0121 06:38:08.319784 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.325504 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 06:38:16 crc kubenswrapper[4913]: E0121 06:38:16.326460 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1319231d-e415-4c56-a0e2-7584edddc7e4" containerName="pruner" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.326487 4913 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1319231d-e415-4c56-a0e2-7584edddc7e4" containerName="pruner" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.326813 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="1319231d-e415-4c56-a0e2-7584edddc7e4" containerName="pruner" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.328650 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.332711 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.333816 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.341129 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.510222 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.510356 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.611693 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.611870 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.612098 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.644971 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.662451 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:17 crc kubenswrapper[4913]: I0121 06:38:17.564506 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:38:20 crc kubenswrapper[4913]: E0121 06:38:20.031407 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 06:38:20 crc kubenswrapper[4913]: E0121 06:38:20.032034 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9dfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:ni
l,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rm75l_openshift-marketplace(be61dd34-8d4d-4525-8187-3c21f22cd88a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:20 crc kubenswrapper[4913]: E0121 06:38:20.033280 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rm75l" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.514388 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.515326 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.522055 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.677273 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.677841 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.678271 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780091 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780153 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780219 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780226 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780343 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.799571 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.838412 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:23 crc kubenswrapper[4913]: E0121 06:38:23.257909 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 06:38:23 crc kubenswrapper[4913]: E0121 06:38:23.258405 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggtzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mvlq6_openshift-marketplace(f2b20a33-f426-426f-9657-3d11d403629f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:23 crc kubenswrapper[4913]: E0121 06:38:23.259888 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mvlq6" podUID="f2b20a33-f426-426f-9657-3d11d403629f" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.234152 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rm75l" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.234169 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mvlq6" podUID="f2b20a33-f426-426f-9657-3d11d403629f" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.326785 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.327096 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9zd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ffbwk_openshift-marketplace(92ab7368-d5ff-4ecc-846a-96791a313bce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.328411 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ffbwk" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" Jan 21 06:38:27 crc kubenswrapper[4913]: E0121 06:38:27.771097 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ffbwk" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" Jan 21 06:38:27 crc kubenswrapper[4913]: E0121 06:38:27.888779 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 06:38:27 crc kubenswrapper[4913]: E0121 06:38:27.888997 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rcfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkwc2_openshift-marketplace(14e729d1-3cb1-49d7-b34f-d997333ec65f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:27 crc kubenswrapper[4913]: E0121 06:38:27.890248 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pkwc2" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" Jan 21 06:38:29 crc 
kubenswrapper[4913]: E0121 06:38:29.072682 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkwc2" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.317182 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.317318 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-956sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jlb56_openshift-marketplace(2255f06f-74ad-4308-9575-c04f8c24d4d5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.318484 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jlb56" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" Jan 21 06:38:29 crc 
kubenswrapper[4913]: I0121 06:38:29.500981 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 06:38:29 crc kubenswrapper[4913]: I0121 06:38:29.577074 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.606552 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.606797 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82n2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:
nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fszdj_openshift-marketplace(b5a378fe-18a6-4be0-8d56-eaddc377bd8b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.615744 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fszdj" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.042052 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.042648 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bn5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hpc4m_openshift-marketplace(d976374c-9adc-426a-9593-43e617e72281): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.044224 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hpc4m" podUID="d976374c-9adc-426a-9593-43e617e72281" Jan 21 06:38:30 crc 
kubenswrapper[4913]: I0121 06:38:30.113130 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f","Type":"ContainerStarted","Data":"0b206ec76a91256c0c91606cbe0925f94e7fbd4e7b6b747641a151b3beb320e9"}
Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.113177 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f","Type":"ContainerStarted","Data":"5cfc103b743cf4cd9f52146d725a2c25d6e49ba42c9012d9ddde5cfdedf47ef3"}
Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.115578 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d389cec5-c315-4a24-92fd-d5ed381b3b5f","Type":"ContainerStarted","Data":"71ed111272da72dc9748856e43d4d8004c600de64ea2bf42ebc31c9d37d07f49"}
Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.115644 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d389cec5-c315-4a24-92fd-d5ed381b3b5f","Type":"ContainerStarted","Data":"f6d03d89bb40acdb0afb436d8c01cd40d38c5fb443dddd3f086059c789e95db6"}
Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.116975 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fszdj" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b"
Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.117056 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jlb56" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5"
Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.127841 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hpc4m" podUID="d976374c-9adc-426a-9593-43e617e72281"
Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.154769 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.155023 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnkdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lpkw9_openshift-marketplace(7521412f-3363-4617-9740-9dd9124df38e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.156317 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lpkw9" podUID="7521412f-3363-4617-9740-9dd9124df38e"
Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.165300 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.16528359 podStartE2EDuration="10.16528359s" podCreationTimestamp="2026-01-21 06:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:38:30.147028061 +0000 UTC m=+199.943387754" watchObservedRunningTime="2026-01-21 06:38:30.16528359 +0000 UTC m=+199.961643263"
Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.181447 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=14.181423103 podStartE2EDuration="14.181423103s" podCreationTimestamp="2026-01-21 06:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:38:30.17404027 +0000 UTC m=+199.970399933" watchObservedRunningTime="2026-01-21 06:38:30.181423103 +0000 UTC m=+199.977782776"
Jan 21 06:38:31 crc kubenswrapper[4913]: I0121 06:38:31.124550 4913 generic.go:334] "Generic (PLEG): container finished" podID="d389cec5-c315-4a24-92fd-d5ed381b3b5f" containerID="71ed111272da72dc9748856e43d4d8004c600de64ea2bf42ebc31c9d37d07f49" exitCode=0
Jan 21 06:38:31 crc kubenswrapper[4913]: I0121 06:38:31.124803 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d389cec5-c315-4a24-92fd-d5ed381b3b5f","Type":"ContainerDied","Data":"71ed111272da72dc9748856e43d4d8004c600de64ea2bf42ebc31c9d37d07f49"}
Jan 21 06:38:31 crc kubenswrapper[4913]: E0121 06:38:31.129435 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lpkw9" podUID="7521412f-3363-4617-9740-9dd9124df38e"
Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.382961 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.446510 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") pod \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") "
Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.446664 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") pod \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") "
Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.448705 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d389cec5-c315-4a24-92fd-d5ed381b3b5f" (UID: "d389cec5-c315-4a24-92fd-d5ed381b3b5f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.452675 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d389cec5-c315-4a24-92fd-d5ed381b3b5f" (UID: "d389cec5-c315-4a24-92fd-d5ed381b3b5f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.547945 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.547984 4913 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:38:33 crc kubenswrapper[4913]: I0121 06:38:33.136451 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d389cec5-c315-4a24-92fd-d5ed381b3b5f","Type":"ContainerDied","Data":"f6d03d89bb40acdb0afb436d8c01cd40d38c5fb443dddd3f086059c789e95db6"}
Jan 21 06:38:33 crc kubenswrapper[4913]: I0121 06:38:33.136854 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d03d89bb40acdb0afb436d8c01cd40d38c5fb443dddd3f086059c789e95db6"
Jan 21 06:38:33 crc kubenswrapper[4913]: I0121 06:38:33.136529 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:38:37 crc kubenswrapper[4913]: I0121 06:38:37.157403 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerStarted","Data":"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"}
Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.164322 4913 generic.go:334] "Generic (PLEG): container finished" podID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerID="9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4" exitCode=0
Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.164358 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerDied","Data":"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"}
Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.318858 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.319013 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.319066 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg"
Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.320512 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.320813 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355" gracePeriod=600
Jan 21 06:38:39 crc kubenswrapper[4913]: I0121 06:38:39.172297 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerStarted","Data":"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"}
Jan 21 06:38:39 crc kubenswrapper[4913]: I0121 06:38:39.174547 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355" exitCode=0
Jan 21 06:38:39 crc kubenswrapper[4913]: I0121 06:38:39.174653 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355"}
Jan 21 06:38:39 crc kubenswrapper[4913]: I0121 06:38:39.174682 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3"}
Jan 21 06:38:40 crc kubenswrapper[4913]: I0121 06:38:40.183209 4913 generic.go:334] "Generic (PLEG): container finished" podID="f2b20a33-f426-426f-9657-3d11d403629f" containerID="7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033" exitCode=0
Jan 21 06:38:40 crc kubenswrapper[4913]: I0121 06:38:40.183281 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerDied","Data":"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033"}
Jan 21 06:38:40 crc kubenswrapper[4913]: I0121 06:38:40.205885 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rm75l" podStartSLOduration=4.038868056 podStartE2EDuration="1m6.205860447s" podCreationTimestamp="2026-01-21 06:37:34 +0000 UTC" firstStartedPulling="2026-01-21 06:37:36.53398838 +0000 UTC m=+146.330348053" lastFinishedPulling="2026-01-21 06:38:38.700980771 +0000 UTC m=+208.497340444" observedRunningTime="2026-01-21 06:38:40.201343734 +0000 UTC m=+209.997703407" watchObservedRunningTime="2026-01-21 06:38:40.205860447 +0000 UTC m=+210.002220160"
Jan 21 06:38:41 crc kubenswrapper[4913]: I0121 06:38:41.193980 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerStarted","Data":"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"}
Jan 21 06:38:41 crc kubenswrapper[4913]: I0121 06:38:41.210560 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mvlq6" podStartSLOduration=2.980940076 podStartE2EDuration="1m8.210542955s" podCreationTimestamp="2026-01-21 06:37:33 +0000 UTC" firstStartedPulling="2026-01-21 06:37:35.506883181 +0000 UTC m=+145.303242854" lastFinishedPulling="2026-01-21 06:38:40.73648606 +0000 UTC m=+210.532845733" observedRunningTime="2026-01-21 06:38:41.209737422 +0000 UTC m=+211.006097095" watchObservedRunningTime="2026-01-21 06:38:41.210542955 +0000 UTC m=+211.006902618"
Jan 21 06:38:42 crc kubenswrapper[4913]: I0121 06:38:42.201212 4913 generic.go:334] "Generic (PLEG): container finished" podID="d976374c-9adc-426a-9593-43e617e72281" containerID="79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be" exitCode=0
Jan 21 06:38:42 crc kubenswrapper[4913]: I0121 06:38:42.201280 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerDied","Data":"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be"}
Jan 21 06:38:42 crc kubenswrapper[4913]: I0121 06:38:42.203501 4913 generic.go:334] "Generic (PLEG): container finished" podID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerID="dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28" exitCode=0
Jan 21 06:38:42 crc kubenswrapper[4913]: I0121 06:38:42.203539 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerDied","Data":"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28"}
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.208948 4913 generic.go:334] "Generic (PLEG): container finished" podID="7521412f-3363-4617-9740-9dd9124df38e" containerID="7df421d769e0135e3fb9a32354b2ade06ec971399dd6c8201985258f4e4a34b1" exitCode=0
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.209010 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerDied","Data":"7df421d769e0135e3fb9a32354b2ade06ec971399dd6c8201985258f4e4a34b1"}
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.211245 4913 generic.go:334] "Generic (PLEG): container finished" podID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerID="a0c217ee06e4d0effa4b06e0042da74da4b4c664dbe6ca8ae4a8f377c3e40172" exitCode=0
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.211313 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerDied","Data":"a0c217ee06e4d0effa4b06e0042da74da4b4c664dbe6ca8ae4a8f377c3e40172"}
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.215466 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerStarted","Data":"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"}
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.220198 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerStarted","Data":"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"}
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.246709 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ffbwk" podStartSLOduration=2.968081665 podStartE2EDuration="1m10.246690901s" podCreationTimestamp="2026-01-21 06:37:33 +0000 UTC" firstStartedPulling="2026-01-21 06:37:35.5060644 +0000 UTC m=+145.302424073" lastFinishedPulling="2026-01-21 06:38:42.784673636 +0000 UTC m=+212.581033309" observedRunningTime="2026-01-21 06:38:43.244479351 +0000 UTC m=+213.040839014" watchObservedRunningTime="2026-01-21 06:38:43.246690901 +0000 UTC m=+213.043050584"
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.275750 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hpc4m" podStartSLOduration=2.197654708 podStartE2EDuration="1m6.275678865s" podCreationTimestamp="2026-01-21 06:37:37 +0000 UTC" firstStartedPulling="2026-01-21 06:37:38.658338788 +0000 UTC m=+148.454698461" lastFinishedPulling="2026-01-21 06:38:42.736362945 +0000 UTC m=+212.532722618" observedRunningTime="2026-01-21 06:38:43.27294137 +0000 UTC m=+213.069301033" watchObservedRunningTime="2026-01-21 06:38:43.275678865 +0000 UTC m=+213.072038538"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.119948 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mvlq6"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.120322 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mvlq6"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.191870 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mvlq6"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.233562 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerStarted","Data":"374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11"}
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.236310 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerStarted","Data":"b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb"}
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.257135 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lpkw9" podStartSLOduration=2.283965613 podStartE2EDuration="1m8.257114616s" podCreationTimestamp="2026-01-21 06:37:36 +0000 UTC" firstStartedPulling="2026-01-21 06:37:37.619692302 +0000 UTC m=+147.416051975" lastFinishedPulling="2026-01-21 06:38:43.592841305 +0000 UTC m=+213.389200978" observedRunningTime="2026-01-21 06:38:44.255194413 +0000 UTC m=+214.051554086" watchObservedRunningTime="2026-01-21 06:38:44.257114616 +0000 UTC m=+214.053474289"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.283697 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkwc2" podStartSLOduration=3.4093494619999998 podStartE2EDuration="1m7.283673783s" podCreationTimestamp="2026-01-21 06:37:37 +0000 UTC" firstStartedPulling="2026-01-21 06:37:39.697644273 +0000 UTC m=+149.494003946" lastFinishedPulling="2026-01-21 06:38:43.571968594 +0000 UTC m=+213.368328267" observedRunningTime="2026-01-21 06:38:44.279990722 +0000 UTC m=+214.076350395" watchObservedRunningTime="2026-01-21 06:38:44.283673783 +0000 UTC m=+214.080033466"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.314085 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ffbwk"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.314135 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ffbwk"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.521124 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rm75l"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.521176 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rm75l"
Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.556078 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rm75l"
Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.242189 4913 generic.go:334] "Generic (PLEG): container finished" podID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerID="d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc" exitCode=0
Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.242276 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerDied","Data":"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc"}
Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.245224 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerStarted","Data":"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"}
Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.308822 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rm75l"
Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.350793 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ffbwk" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" probeResult="failure" output=<
Jan 21 06:38:45 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s
Jan 21 06:38:45 crc kubenswrapper[4913]: >
Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.198667 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rm75l"]
Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.250319 4913 generic.go:334] "Generic (PLEG): container finished" podID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerID="eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf" exitCode=0
Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.250373 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerDied","Data":"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"}
Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.465953 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.465999 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.544360 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lpkw9"
Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.255638 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rm75l" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="registry-server" containerID="cri-o://f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655" gracePeriod=2
Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.472862 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hpc4m"
Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.472924 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hpc4m"
Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.921530 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkwc2"
Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.921911 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pkwc2"
Jan 21 06:38:48 crc kubenswrapper[4913]: I0121 06:38:48.513069 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hpc4m" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" probeResult="failure" output=<
Jan 21 06:38:48 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s
Jan 21 06:38:48 crc kubenswrapper[4913]: >
Jan 21 06:38:48 crc kubenswrapper[4913]: I0121 06:38:48.959114 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pkwc2" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" probeResult="failure" output=<
Jan 21 06:38:48 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s
Jan 21 06:38:48 crc kubenswrapper[4913]: >
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.144635 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rm75l"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.165225 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") pod \"be61dd34-8d4d-4525-8187-3c21f22cd88a\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") "
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.165279 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") pod \"be61dd34-8d4d-4525-8187-3c21f22cd88a\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") "
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.165331 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") pod \"be61dd34-8d4d-4525-8187-3c21f22cd88a\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") "
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.166769 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities" (OuterVolumeSpecName: "utilities") pod "be61dd34-8d4d-4525-8187-3c21f22cd88a" (UID: "be61dd34-8d4d-4525-8187-3c21f22cd88a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.189117 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz" (OuterVolumeSpecName: "kube-api-access-n9dfz") pod "be61dd34-8d4d-4525-8187-3c21f22cd88a" (UID: "be61dd34-8d4d-4525-8187-3c21f22cd88a"). InnerVolumeSpecName "kube-api-access-n9dfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.267092 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.267125 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") on node \"crc\" DevicePath \"\""
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.269835 4913 generic.go:334] "Generic (PLEG): container finished" podID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerID="f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655" exitCode=0
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.269890 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerDied","Data":"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"}
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.269923 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerDied","Data":"2b2da556d8d5ceb79d9f0ad50be41dd604bef4e604d018fead743630456fc287"}
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.269944 4913 scope.go:117] "RemoveContainer" containerID="f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.270086 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rm75l"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.283353 4913 scope.go:117] "RemoveContainer" containerID="9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.300651 4913 scope.go:117] "RemoveContainer" containerID="ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.328785 4913 scope.go:117] "RemoveContainer" containerID="f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"
Jan 21 06:38:49 crc kubenswrapper[4913]: E0121 06:38:49.329236 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655\": container with ID starting with f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655 not found: ID does not exist" containerID="f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.329274 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"} err="failed to get container status \"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655\": rpc error: code = NotFound desc = could not find container \"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655\": container with ID starting with f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655 not found: ID does not exist"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.329302 4913 scope.go:117] "RemoveContainer" containerID="9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"
Jan 21 06:38:49 crc kubenswrapper[4913]: E0121 06:38:49.329790 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4\": container with ID starting with 9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4 not found: ID does not exist" containerID="9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.329824 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"} err="failed to get container status \"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4\": rpc error: code = NotFound desc = could not find container \"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4\": container with ID starting with 9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4 not found: ID does not exist"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.329842 4913 scope.go:117] "RemoveContainer" containerID="ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea"
Jan 21 06:38:49 crc kubenswrapper[4913]: E0121 06:38:49.330116 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea\": container with ID starting with ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea not found: ID does not exist" containerID="ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.330142 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea"} err="failed to get container status \"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea\": rpc error: code = NotFound desc = could not find container \"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea\": container with ID starting with ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea not found: ID does not exist"
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.693168 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be61dd34-8d4d-4525-8187-3c21f22cd88a" (UID: "be61dd34-8d4d-4525-8187-3c21f22cd88a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.772165 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.895238 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rm75l"]
Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.897521 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rm75l"]
Jan 21 06:38:50 crc kubenswrapper[4913]: I0121 06:38:50.534538 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" path="/var/lib/kubelet/pods/be61dd34-8d4d-4525-8187-3c21f22cd88a/volumes"
Jan 21 06:38:54 crc kubenswrapper[4913]: I0121 06:38:54.184097 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mvlq6"
Jan 21 06:38:54 crc kubenswrapper[4913]: I0121 06:38:54.351913 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ffbwk"
Jan 21 06:38:54 crc kubenswrapper[4913]: I0121 06:38:54.388002 4913 kubelet.go:2542] "SyncLoop (probe)"
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:38:56 crc kubenswrapper[4913]: I0121 06:38:56.534518 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:38:56 crc kubenswrapper[4913]: I0121 06:38:56.605357 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:38:56 crc kubenswrapper[4913]: I0121 06:38:56.956277 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lpkw9" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="registry-server" containerID="cri-o://374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11" gracePeriod=2 Jan 21 06:38:57 crc kubenswrapper[4913]: I0121 06:38:57.552648 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:38:57 crc kubenswrapper[4913]: I0121 06:38:57.611919 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:38:57 crc kubenswrapper[4913]: I0121 06:38:57.980498 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:38:58 crc kubenswrapper[4913]: I0121 06:38:58.040901 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:38:58 crc kubenswrapper[4913]: I0121 06:38:58.970617 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerStarted","Data":"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"} Jan 21 06:38:58 crc kubenswrapper[4913]: I0121 06:38:58.973677 4913 generic.go:334] "Generic (PLEG): 
container finished" podID="7521412f-3363-4617-9740-9dd9124df38e" containerID="374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11" exitCode=0 Jan 21 06:38:58 crc kubenswrapper[4913]: I0121 06:38:58.973728 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerDied","Data":"374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11"} Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.003505 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jlb56" podStartSLOduration=7.918960463 podStartE2EDuration="1m25.003483633s" podCreationTimestamp="2026-01-21 06:37:35 +0000 UTC" firstStartedPulling="2026-01-21 06:37:37.623935655 +0000 UTC m=+147.420295328" lastFinishedPulling="2026-01-21 06:38:54.708458835 +0000 UTC m=+224.504818498" observedRunningTime="2026-01-21 06:39:00.001468207 +0000 UTC m=+229.797827880" watchObservedRunningTime="2026-01-21 06:39:00.003483633 +0000 UTC m=+229.799843306" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.238960 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.416505 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") pod \"7521412f-3363-4617-9740-9dd9124df38e\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.416726 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") pod \"7521412f-3363-4617-9740-9dd9124df38e\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.416823 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") pod \"7521412f-3363-4617-9740-9dd9124df38e\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.417982 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities" (OuterVolumeSpecName: "utilities") pod "7521412f-3363-4617-9740-9dd9124df38e" (UID: "7521412f-3363-4617-9740-9dd9124df38e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.422496 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv" (OuterVolumeSpecName: "kube-api-access-dnkdv") pod "7521412f-3363-4617-9740-9dd9124df38e" (UID: "7521412f-3363-4617-9740-9dd9124df38e"). InnerVolumeSpecName "kube-api-access-dnkdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.458288 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7521412f-3363-4617-9740-9dd9124df38e" (UID: "7521412f-3363-4617-9740-9dd9124df38e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.517801 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.517845 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.517857 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.797886 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.798186 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pkwc2" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" containerID="cri-o://b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb" gracePeriod=2 Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.999112 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerDied","Data":"4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015"} Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.999226 4913 scope.go:117] "RemoveContainer" containerID="374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.999415 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:39:01 crc kubenswrapper[4913]: I0121 06:39:01.027237 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:39:01 crc kubenswrapper[4913]: I0121 06:39:01.028774 4913 scope.go:117] "RemoveContainer" containerID="7df421d769e0135e3fb9a32354b2ade06ec971399dd6c8201985258f4e4a34b1" Jan 21 06:39:01 crc kubenswrapper[4913]: I0121 06:39:01.031967 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:39:01 crc kubenswrapper[4913]: I0121 06:39:01.055586 4913 scope.go:117] "RemoveContainer" containerID="93e15fb5b03e79a08467b762e78a24c070dcc8c24e8f33b03e16ab6662aedb40" Jan 21 06:39:02 crc kubenswrapper[4913]: I0121 06:39:02.009289 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerStarted","Data":"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"} Jan 21 06:39:02 crc kubenswrapper[4913]: I0121 06:39:02.536493 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7521412f-3363-4617-9740-9dd9124df38e" path="/var/lib/kubelet/pods/7521412f-3363-4617-9740-9dd9124df38e/volumes" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.020195 4913 generic.go:334] "Generic (PLEG): container finished" 
podID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerID="b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb" exitCode=0 Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.020295 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerDied","Data":"b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb"} Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.045768 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fszdj" podStartSLOduration=4.807489562 podStartE2EDuration="1m29.045744556s" podCreationTimestamp="2026-01-21 06:37:34 +0000 UTC" firstStartedPulling="2026-01-21 06:37:35.511663229 +0000 UTC m=+145.308022902" lastFinishedPulling="2026-01-21 06:38:59.749918213 +0000 UTC m=+229.546277896" observedRunningTime="2026-01-21 06:39:03.044700797 +0000 UTC m=+232.841060490" watchObservedRunningTime="2026-01-21 06:39:03.045744556 +0000 UTC m=+232.842104249" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.338040 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.458088 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") pod \"14e729d1-3cb1-49d7-b34f-d997333ec65f\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.458251 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") pod \"14e729d1-3cb1-49d7-b34f-d997333ec65f\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.458308 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") pod \"14e729d1-3cb1-49d7-b34f-d997333ec65f\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.459356 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities" (OuterVolumeSpecName: "utilities") pod "14e729d1-3cb1-49d7-b34f-d997333ec65f" (UID: "14e729d1-3cb1-49d7-b34f-d997333ec65f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.462915 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg" (OuterVolumeSpecName: "kube-api-access-5rcfg") pod "14e729d1-3cb1-49d7-b34f-d997333ec65f" (UID: "14e729d1-3cb1-49d7-b34f-d997333ec65f"). InnerVolumeSpecName "kube-api-access-5rcfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.559138 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.559171 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.574373 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14e729d1-3cb1-49d7-b34f-d997333ec65f" (UID: "14e729d1-3cb1-49d7-b34f-d997333ec65f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.660777 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.028406 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerDied","Data":"8f5064bd05054c2b02632229ded6fedcb4045b72cf1e85d3555133283a45b0c3"} Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.028469 4913 scope.go:117] "RemoveContainer" containerID="b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.028533 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.049939 4913 scope.go:117] "RemoveContainer" containerID="a0c217ee06e4d0effa4b06e0042da74da4b4c664dbe6ca8ae4a8f377c3e40172" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.079633 4913 scope.go:117] "RemoveContainer" containerID="4536887c323df45fbc4166635e0604a06736c4d0fb3091dd1489a3822a0f1cf4" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.095320 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.098823 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.534694 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" path="/var/lib/kubelet/pods/14e729d1-3cb1-49d7-b34f-d997333ec65f/volumes" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.712506 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.712574 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.756414 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:39:06 crc kubenswrapper[4913]: I0121 06:39:06.071093 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:39:06 crc kubenswrapper[4913]: I0121 06:39:06.071132 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:39:06 crc 
kubenswrapper[4913]: I0121 06:39:06.115367 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.104671 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.456474 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62"] Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.692737 4913 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693019 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693097 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693131 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693028 4913 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693181 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699240 4913 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699694 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699711 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699724 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d389cec5-c315-4a24-92fd-d5ed381b3b5f" containerName="pruner" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699732 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d389cec5-c315-4a24-92fd-d5ed381b3b5f" containerName="pruner" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699740 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699898 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 
06:39:07.699934 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699941 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699950 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699957 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699964 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699969 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699979 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700004 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700013 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700019 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:39:07 
crc kubenswrapper[4913]: E0121 06:39:07.700028 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700036 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700047 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700052 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700062 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700088 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700095 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700100 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700108 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700114 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="extract-content" Jan 
21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700120 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700126 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700132 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700138 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700173 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700180 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700187 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700193 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700326 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700339 4913 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700346 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700354 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700361 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="d389cec5-c315-4a24-92fd-d5ed381b3b5f" containerName="pruner" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700370 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700376 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700384 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700409 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700418 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.701899 4913 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.702307 4913 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.705899 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.733377 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.813509 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.813949 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814100 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814231 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814346 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814372 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814412 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814431 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915808 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915853 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915896 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915920 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915935 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915965 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915970 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915990 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915999 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916033 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915993 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916011 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916015 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916080 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916156 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916205 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.030853 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:08 crc kubenswrapper[4913]: W0121 06:39:08.046493 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f WatchSource:0}: Error finding container 673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f: Status 404 returned error can't find the container with id 673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f Jan 21 06:39:08 crc kubenswrapper[4913]: E0121 06:39:08.049250 4913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cabb51f14d6c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,LastTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.059655 4913 generic.go:334] "Generic (PLEG): container finished" podID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" containerID="0b206ec76a91256c0c91606cbe0925f94e7fbd4e7b6b747641a151b3beb320e9" exitCode=0 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.059778 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f","Type":"ContainerDied","Data":"0b206ec76a91256c0c91606cbe0925f94e7fbd4e7b6b747641a151b3beb320e9"} Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.060388 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.060657 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.062109 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063276 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 
06:39:08.063919 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" exitCode=0 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063943 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" exitCode=0 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063951 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" exitCode=0 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063958 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" exitCode=2 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063992 4913 scope.go:117] "RemoveContainer" containerID="52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.065034 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f"} Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.077093 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.081420 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"43add6da62b377541babb732cc3e9566b8a93ef426ada2deaad87cd9ee4e97bb"} Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.082504 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.084245 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.312194 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.312669 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.313033 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435254 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") pod \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435317 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" (UID: "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435329 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") pod \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435376 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") pod \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435395 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock" (OuterVolumeSpecName: "var-lock") pod "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" (UID: "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435769 4913 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435780 4913 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.441940 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" (UID: "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.536791 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.088457 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.088477 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f","Type":"ContainerDied","Data":"5cfc103b743cf4cd9f52146d725a2c25d6e49ba42c9012d9ddde5cfdedf47ef3"} Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.089006 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cfc103b743cf4cd9f52146d725a2c25d6e49ba42c9012d9ddde5cfdedf47ef3" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.141176 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.141708 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.145995 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.146806 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.147355 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.147791 4913 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.148354 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243499 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243567 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243637 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243940 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243982 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.244001 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.344833 4913 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.344891 4913 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.344906 4913 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.530240 4913 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.530685 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.531102 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection 
refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.534450 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.065963 4913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cabb51f14d6c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,LastTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.100472 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" exitCode=0 Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.100528 4913 scope.go:117] "RemoveContainer" containerID="4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.100695 4913 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.101723 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.102180 4913 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.103036 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.105188 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.106253 4913 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.106492 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.123231 4913 scope.go:117] "RemoveContainer" containerID="5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.137969 4913 scope.go:117] "RemoveContainer" containerID="75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.157519 4913 scope.go:117] "RemoveContainer" containerID="c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.173184 4913 scope.go:117] "RemoveContainer" containerID="c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.191469 4913 scope.go:117] "RemoveContainer" containerID="6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.217152 4913 scope.go:117] "RemoveContainer" containerID="4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.218108 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\": container with ID starting with 4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31 not found: ID does not 
exist" containerID="4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.218163 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31"} err="failed to get container status \"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\": rpc error: code = NotFound desc = could not find container \"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\": container with ID starting with 4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.218185 4913 scope.go:117] "RemoveContainer" containerID="5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.218830 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\": container with ID starting with 5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32 not found: ID does not exist" containerID="5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.218924 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32"} err="failed to get container status \"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\": rpc error: code = NotFound desc = could not find container \"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\": container with ID starting with 5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.218969 4913 scope.go:117] 
"RemoveContainer" containerID="75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.219428 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\": container with ID starting with 75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019 not found: ID does not exist" containerID="75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.219483 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019"} err="failed to get container status \"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\": rpc error: code = NotFound desc = could not find container \"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\": container with ID starting with 75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.219529 4913 scope.go:117] "RemoveContainer" containerID="c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.220167 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\": container with ID starting with c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4 not found: ID does not exist" containerID="c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.220198 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4"} err="failed to get container status \"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\": rpc error: code = NotFound desc = could not find container \"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\": container with ID starting with c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.220217 4913 scope.go:117] "RemoveContainer" containerID="c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.220555 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\": container with ID starting with c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5 not found: ID does not exist" containerID="c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.220575 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5"} err="failed to get container status \"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\": rpc error: code = NotFound desc = could not find container \"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\": container with ID starting with c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.220604 4913 scope.go:117] "RemoveContainer" containerID="6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.220986 4913 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\": container with ID starting with 6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204 not found: ID does not exist" containerID="6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.221024 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204"} err="failed to get container status \"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\": rpc error: code = NotFound desc = could not find container \"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\": container with ID starting with 6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204 not found: ID does not exist" Jan 21 06:39:14 crc kubenswrapper[4913]: I0121 06:39:14.769117 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:39:14 crc kubenswrapper[4913]: I0121 06:39:14.769778 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:14 crc kubenswrapper[4913]: I0121 06:39:14.770362 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:14 crc kubenswrapper[4913]: I0121 06:39:14.771065 4913 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.522562 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.523176 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.523743 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.524204 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.524642 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: I0121 06:39:17.524704 4913 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts 
to update lease" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.525147 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.726334 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms" Jan 21 06:39:18 crc kubenswrapper[4913]: E0121 06:39:18.127606 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms" Jan 21 06:39:18 crc kubenswrapper[4913]: E0121 06:39:18.562124 4913 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" volumeName="registry-storage" Jan 21 06:39:18 crc kubenswrapper[4913]: E0121 06:39:18.929325 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s" Jan 21 06:39:20 crc kubenswrapper[4913]: E0121 06:39:20.530272 4913 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="3.2s" Jan 21 06:39:20 crc kubenswrapper[4913]: I0121 06:39:20.532636 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:20 crc kubenswrapper[4913]: I0121 06:39:20.533361 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:20 crc kubenswrapper[4913]: I0121 06:39:20.534052 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:21 crc kubenswrapper[4913]: E0121 06:39:21.067151 4913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cabb51f14d6c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,LastTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.176713 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.176810 4913 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727" exitCode=1 Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.176858 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727"} Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.177631 4913 scope.go:117] "RemoveContainer" containerID="ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.178292 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.179032 4913 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.179741 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.180738 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.527102 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.528436 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.529103 4913 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.529461 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.529885 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.549984 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.550026 4913 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:22 crc kubenswrapper[4913]: E0121 06:39:22.550635 4913 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.551292 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:22 crc kubenswrapper[4913]: W0121 06:39:22.580580 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2b72d50c2aef62572d7cd2b2bfa5d0ca8bb62cab8b2d5d908382c77a3e6be9fb WatchSource:0}: Error finding container 2b72d50c2aef62572d7cd2b2bfa5d0ca8bb62cab8b2d5d908382c77a3e6be9fb: Status 404 returned error can't find the container with id 2b72d50c2aef62572d7cd2b2bfa5d0ca8bb62cab8b2d5d908382c77a3e6be9fb Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.189483 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.189875 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c52c9ebf8626ba7e0921f8f4b7b3291277c38fee2ef91fe31b90732a25a7f81d"} Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.191404 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.192120 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.192708 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.192932 4913 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e7036e262d91ff61126733fa5d492914b16196efae02cae4e0e6c5c5e13f0ac4" exitCode=0 Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.192981 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e7036e262d91ff61126733fa5d492914b16196efae02cae4e0e6c5c5e13f0ac4"} Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193011 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2b72d50c2aef62572d7cd2b2bfa5d0ca8bb62cab8b2d5d908382c77a3e6be9fb"} Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193196 4913 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193305 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193320 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:23 crc kubenswrapper[4913]: E0121 06:39:23.193768 4913 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193864 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.197078 4913 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.197518 4913 status_manager.go:851] "Failed to 
get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.197867 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.006205 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.014667 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.201209 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c30d3d80bed8eb641fa894e2540ecb55078e3943f4489d95ae6649b34503b551"} Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.201294 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2c9cf8f51a1594bb24855ce8083da2e718374c343be191f79305151b45f4e85"} Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.201336 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cfc26a938aee341d29090998cf1db6d0a34b50f4f394028cc14a30369ab4b858"}
Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.201709 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:39:25 crc kubenswrapper[4913]: I0121 06:39:25.210292 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6828f539998b6fca1d81781efa2153f2b4b210388de3575e0d28a396c2edd6f7"}
Jan 21 06:39:25 crc kubenswrapper[4913]: I0121 06:39:25.210637 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f221e52caedca5d4d27564ec829e9a3215d870a203bb9a8724281a92a530e5e"}
Jan 21 06:39:25 crc kubenswrapper[4913]: I0121 06:39:25.210641 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59"
Jan 21 06:39:25 crc kubenswrapper[4913]: I0121 06:39:25.210669 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59"
Jan 21 06:39:27 crc kubenswrapper[4913]: I0121 06:39:27.552066 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:39:27 crc kubenswrapper[4913]: I0121 06:39:27.553103 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:39:27 crc kubenswrapper[4913]: I0121 06:39:27.562999 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:39:30 crc kubenswrapper[4913]: I0121 06:39:30.228922 4913 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:39:30 crc kubenswrapper[4913]: I0121 06:39:30.544485 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8e606b3c-12de-490a-a076-7ff0b95a072d"
Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.250294 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59"
Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.250335 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59"
Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.250336 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.254183 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8e606b3c-12de-490a-a076-7ff0b95a072d"
Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.257531 4913 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://cfc26a938aee341d29090998cf1db6d0a34b50f4f394028cc14a30369ab4b858"
Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.257573 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.255406 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59"
Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.255433 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59"
Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.259232 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8e606b3c-12de-490a-a076-7ff0b95a072d"
Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.483454 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift" containerID="cri-o://62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065" gracePeriod=15
Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.921575 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041210 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041291 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041340 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041399 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041441 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041480 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041516 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041549 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041582 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041639 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041675 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041723 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041763 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041797 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") "
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.042875 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.042908 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.042990 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.044458 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.045457 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.048439 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.048856 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.049493 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.050113 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.050255 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.050478 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.051293 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr" (OuterVolumeSpecName: "kube-api-access-2d6jr") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "kube-api-access-2d6jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.052380 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.053956 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143794 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143898 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143917 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143931 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143966 4913 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143977 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143989 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143998 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144008 4913 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144018 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144032 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144042 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144051 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144061 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.265769 4913 generic.go:334] "Generic (PLEG): container finished" podID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerID="62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065" exitCode=0
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.267146 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59"
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.267173 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59"
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.265884 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.265915 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" event={"ID":"bd2a9afe-21be-43e4-970d-03daff0713a1","Type":"ContainerDied","Data":"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"}
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.267783 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" event={"ID":"bd2a9afe-21be-43e4-970d-03daff0713a1","Type":"ContainerDied","Data":"e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab"}
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.267829 4913 scope.go:117] "RemoveContainer" containerID="62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.272815 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8e606b3c-12de-490a-a076-7ff0b95a072d"
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.324782 4913 scope.go:117] "RemoveContainer" containerID="62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"
Jan 21 06:39:33 crc kubenswrapper[4913]: E0121 06:39:33.325484 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065\": container with ID starting with 62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065 not found: ID does not exist" containerID="62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"
Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.325535 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"} err="failed to get container status \"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065\": rpc error: code = NotFound desc = could not find container \"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065\": container with ID starting with 62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065 not found: ID does not exist"
Jan 21 06:39:38 crc kubenswrapper[4913]: I0121 06:39:38.341366 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:39:40 crc kubenswrapper[4913]: I0121 06:39:40.237114 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 06:39:40 crc kubenswrapper[4913]: I0121 06:39:40.434256 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 06:39:40 crc kubenswrapper[4913]: I0121 06:39:40.759644 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 06:39:40 crc kubenswrapper[4913]: I0121 06:39:40.853056 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 21 06:39:41 crc kubenswrapper[4913]: I0121 06:39:41.443861 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 21 06:39:41 crc kubenswrapper[4913]: I0121 06:39:41.767167 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 06:39:41 crc kubenswrapper[4913]: I0121 06:39:41.997234 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.037899 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.038255 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.178025 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.307699 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.335485 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.339733 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.403691 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.749977 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.832063 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.836910 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.979139 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.078983 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.342656 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.365504 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.416005 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.512253 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.521534 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.530785 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.580079 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.628225 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.687047 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.768292 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.865448 4913 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.959350 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.978773 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.990904 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.244571 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.274412 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.333166 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.345985 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.350675 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.402711 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.428656 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.498540 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.504849 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.587621 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.603557 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.837184 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.879633 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.895150 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.028519 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.193585 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.215466 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.215518 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.359585 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.388992 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.460296 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.503760 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.527309 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.581521 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.587487 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.618483 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.619580 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.640656 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.643389 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.651130 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.736582 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.768898 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.781064 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.860830 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.866518 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.929122 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.956322 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.985568 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.094883 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.117096 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.141118 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.144177 4913 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.148706 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.148683982 podStartE2EDuration="39.148683982s" podCreationTimestamp="2026-01-21 06:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:39:30.130254733 +0000 UTC m=+259.926614426" watchObservedRunningTime="2026-01-21 06:39:46.148683982 +0000 UTC m=+275.945043695"
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.152855 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.152927 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.155380 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.160738 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:39:46 crc
kubenswrapper[4913]: I0121 06:39:46.179385 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.179365984 podStartE2EDuration="16.179365984s" podCreationTimestamp="2026-01-21 06:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:39:46.176404202 +0000 UTC m=+275.972763885" watchObservedRunningTime="2026-01-21 06:39:46.179365984 +0000 UTC m=+275.975725667" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.336758 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.349324 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.425464 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.466969 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.525489 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.534392 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.534993 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" path="/var/lib/kubelet/pods/bd2a9afe-21be-43e4-970d-03daff0713a1/volumes" Jan 21 06:39:46 crc kubenswrapper[4913]: 
I0121 06:39:46.545443 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.595021 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.710559 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.733232 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.769310 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.797345 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.855970 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.883300 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.939131 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.947337 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.948631 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.958746 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.003907 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.076511 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.177235 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.178187 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.231049 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.306069 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.344698 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.442607 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.459251 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.587720 
4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.696918 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.777084 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.836429 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.882359 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.918250 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.955335 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.997487 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.002194 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.214478 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.230886 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 06:39:48 crc 
kubenswrapper[4913]: I0121 06:39:48.259417 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.325291 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.436427 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.458965 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.482947 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.540558 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.573788 4913 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.590932 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.649516 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.759840 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.793932 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.794143 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.864064 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.869177 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.987262 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.116843 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.228065 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.310676 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.356685 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.418556 4913 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.442651 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.467849 4913 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.503138 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.628072 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.638772 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.661094 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.674430 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.866365 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.991905 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.079339 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.094469 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.110253 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 
06:39:50.132683 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.260311 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.362380 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.390308 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.439179 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.452059 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.455341 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.499421 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.509465 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.522135 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.584646 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 
21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.707824 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.785028 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.787392 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.788925 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.844550 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.864614 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.999218 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.057959 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.270821 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.472014 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.493666 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.646875 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.675198 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.678836 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.727074 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.888517 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.960943 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.020462 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.051804 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.082943 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.124624 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.171477 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.174571 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.187521 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.223610 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.259709 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.372807 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.385222 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.458904 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.460708 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.536359 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 
06:39:52.750696 4913 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.847483 4913 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.847849 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://43add6da62b377541babb732cc3e9566b8a93ef426ada2deaad87cd9ee4e97bb" gracePeriod=5 Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.859036 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.888008 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.994760 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.018191 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.102538 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.179334 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.192015 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.241057 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.386318 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.389187 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.408116 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.409543 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.431643 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.498204 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.528042 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.583987 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.628397 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: 
I0121 06:39:53.632753 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.668275 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.753738 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.781268 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.800254 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.936076 4913 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.172822 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.250155 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.277414 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"]
Jan 21 06:39:54 crc kubenswrapper[4913]: E0121 06:39:54.277915 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.277967 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 06:39:54 crc kubenswrapper[4913]: E0121 06:39:54.278008 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" containerName="installer"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278025 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" containerName="installer"
Jan 21 06:39:54 crc kubenswrapper[4913]: E0121 06:39:54.278052 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278068 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278336 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" containerName="installer"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278377 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278404 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.279229 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.284742 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.284800 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.284954 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.285749 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.286465 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.289501 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.289504 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.291625 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.291636 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.291830 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.291968 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.296290 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.317887 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.318823 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.331758 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.338431 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.364026 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"]
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.402796 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440781 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440831 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-policies\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440889 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440924 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-dir\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440966 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440985 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441107 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wqx\" (UniqueName: \"kubernetes.io/projected/40d3280d-72fa-4b13-ba3a-94dda976ad5f-kube-api-access-j9wqx\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441249 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441284 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441330 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441388 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441456 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441485 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441520 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.453024 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.492052 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.504861 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543136 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wqx\" (UniqueName: \"kubernetes.io/projected/40d3280d-72fa-4b13-ba3a-94dda976ad5f-kube-api-access-j9wqx\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543209 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543237 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543268 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543291 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543321 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543350 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543395 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543425 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543470 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-policies\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543498 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543527 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-dir\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543621 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543659 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.546383 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-dir\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.547241 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.547241 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-policies\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.548100 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.557923 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.561856 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.563258 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.563313 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.563628 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.564129 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.564992 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.572457 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.573725 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.577020 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wqx\" (UniqueName: \"kubernetes.io/projected/40d3280d-72fa-4b13-ba3a-94dda976ad5f-kube-api-access-j9wqx\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.578899 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.616287 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.808356 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"]
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.808854 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.811734 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.873958 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.922182 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.965516 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.007578 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.017385 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.140485 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.426021 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" event={"ID":"40d3280d-72fa-4b13-ba3a-94dda976ad5f","Type":"ContainerStarted","Data":"67466a9b6b6f2ca145f3464f8a49e5dd3bdf5481e9245013f6450d262d3e774d"}
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.426094 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" event={"ID":"40d3280d-72fa-4b13-ba3a-94dda976ad5f","Type":"ContainerStarted","Data":"ecf0f62c5f69dda3b63f27250d4153ae2523756bfdacde05ab9250c9f6935dbe"}
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.427649 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.450526 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" podStartSLOduration=48.450508518 podStartE2EDuration="48.450508518s" podCreationTimestamp="2026-01-21 06:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:39:55.447523075 +0000 UTC m=+285.243882758" watchObservedRunningTime="2026-01-21 06:39:55.450508518 +0000 UTC m=+285.246868191"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.735377 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.814185 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.828113 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.895123 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 06:39:56 crc kubenswrapper[4913]: I0121 06:39:56.113489 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 21 06:39:56 crc kubenswrapper[4913]: I0121 06:39:56.180858 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.445915 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.445994 4913 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="43add6da62b377541babb732cc3e9566b8a93ef426ada2deaad87cd9ee4e97bb" exitCode=137
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.446065 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f"
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.454391 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.454810 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.533972 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.548878 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.548926 4913 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4a4f6234-7e9e-4216-83f8-6dd33d1298d2"
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.550538 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.550578 4913 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4a4f6234-7e9e-4216-83f8-6dd33d1298d2"
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602166 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602251 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602266 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602380 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602444 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602471 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602545 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602554 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.603298 4913 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.603334 4913 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.603353 4913 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.603370 4913 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.614131 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.704478 4913 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:39:59 crc kubenswrapper[4913]: I0121 06:39:59.452374 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:40:00 crc kubenswrapper[4913]: I0121 06:40:00.537443 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.431581 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"]
Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.432668 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ffbwk" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" containerID="cri-o://791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2" gracePeriod=30
Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.443641 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"]
Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.443947 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fszdj" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="registry-server" containerID="cri-o://b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5" gracePeriod=30
Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.460669 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"]
Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.461084 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mvlq6" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="registry-server" containerID="cri-o://f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108" gracePeriod=30
Jan 21 06:40:05 crc 
kubenswrapper[4913]: I0121 06:40:05.482086 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.482571 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" containerID="cri-o://ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3" gracePeriod=30 Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.491634 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.491869 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jlb56" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="registry-server" containerID="cri-o://10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c" gracePeriod=30 Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.496719 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mmmzm"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.497755 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.503439 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.504410 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hpc4m" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" containerID="cri-o://f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9" gracePeriod=30 Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.504673 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mmmzm"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.600240 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.600448 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xhv\" (UniqueName: \"kubernetes.io/projected/9850b956-f0a1-4e29-b5c2-703b0aa7b697-kube-api-access-m8xhv\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.600616 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.702085 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.702140 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xhv\" (UniqueName: \"kubernetes.io/projected/9850b956-f0a1-4e29-b5c2-703b0aa7b697-kube-api-access-m8xhv\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.702175 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.703924 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 
crc kubenswrapper[4913]: I0121 06:40:05.707670 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.722906 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xhv\" (UniqueName: \"kubernetes.io/projected/9850b956-f0a1-4e29-b5c2-703b0aa7b697-kube-api-access-m8xhv\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.819171 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.952316 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.957611 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.965146 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.973014 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.974098 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.112878 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") pod \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.112926 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") pod \"2255f06f-74ad-4308-9575-c04f8c24d4d5\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.112967 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") pod \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.112994 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") pod \"2255f06f-74ad-4308-9575-c04f8c24d4d5\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113029 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") pod \"f2b20a33-f426-426f-9657-3d11d403629f\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113059 4913 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") pod \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113093 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") pod \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113115 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") pod \"d976374c-9adc-426a-9593-43e617e72281\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113143 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") pod \"f2b20a33-f426-426f-9657-3d11d403629f\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113186 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") pod \"d976374c-9adc-426a-9593-43e617e72281\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113215 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") pod \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\" (UID: 
\"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113243 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") pod \"f2b20a33-f426-426f-9657-3d11d403629f\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113268 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") pod \"d976374c-9adc-426a-9593-43e617e72281\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113290 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") pod \"2255f06f-74ad-4308-9575-c04f8c24d4d5\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113315 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") pod \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113747 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f3e3e7a7-a59e-4d12-8499-38ad4a72832d" (UID: "f3e3e7a7-a59e-4d12-8499-38ad4a72832d"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113903 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities" (OuterVolumeSpecName: "utilities") pod "2255f06f-74ad-4308-9575-c04f8c24d4d5" (UID: "2255f06f-74ad-4308-9575-c04f8c24d4d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.114454 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities" (OuterVolumeSpecName: "utilities") pod "f2b20a33-f426-426f-9657-3d11d403629f" (UID: "f2b20a33-f426-426f-9657-3d11d403629f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.114632 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities" (OuterVolumeSpecName: "utilities") pod "d976374c-9adc-426a-9593-43e617e72281" (UID: "d976374c-9adc-426a-9593-43e617e72281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.117143 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities" (OuterVolumeSpecName: "utilities") pod "b5a378fe-18a6-4be0-8d56-eaddc377bd8b" (UID: "b5a378fe-18a6-4be0-8d56-eaddc377bd8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.117682 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm" (OuterVolumeSpecName: "kube-api-access-pqstm") pod "f3e3e7a7-a59e-4d12-8499-38ad4a72832d" (UID: "f3e3e7a7-a59e-4d12-8499-38ad4a72832d"). InnerVolumeSpecName "kube-api-access-pqstm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.120982 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp" (OuterVolumeSpecName: "kube-api-access-956sp") pod "2255f06f-74ad-4308-9575-c04f8c24d4d5" (UID: "2255f06f-74ad-4308-9575-c04f8c24d4d5"). InnerVolumeSpecName "kube-api-access-956sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.121068 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c" (OuterVolumeSpecName: "kube-api-access-8bn5c") pod "d976374c-9adc-426a-9593-43e617e72281" (UID: "d976374c-9adc-426a-9593-43e617e72281"). InnerVolumeSpecName "kube-api-access-8bn5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.121140 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg" (OuterVolumeSpecName: "kube-api-access-ggtzg") pod "f2b20a33-f426-426f-9657-3d11d403629f" (UID: "f2b20a33-f426-426f-9657-3d11d403629f"). InnerVolumeSpecName "kube-api-access-ggtzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.121194 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p" (OuterVolumeSpecName: "kube-api-access-82n2p") pod "b5a378fe-18a6-4be0-8d56-eaddc377bd8b" (UID: "b5a378fe-18a6-4be0-8d56-eaddc377bd8b"). InnerVolumeSpecName "kube-api-access-82n2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.121340 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f3e3e7a7-a59e-4d12-8499-38ad4a72832d" (UID: "f3e3e7a7-a59e-4d12-8499-38ad4a72832d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.146520 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2255f06f-74ad-4308-9575-c04f8c24d4d5" (UID: "2255f06f-74ad-4308-9575-c04f8c24d4d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.180735 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5a378fe-18a6-4be0-8d56-eaddc377bd8b" (UID: "b5a378fe-18a6-4be0-8d56-eaddc377bd8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.184226 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2b20a33-f426-426f-9657-3d11d403629f" (UID: "f2b20a33-f426-426f-9657-3d11d403629f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219242 4913 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219284 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219297 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219306 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219315 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219323 4913 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219330 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219338 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219375 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219385 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219397 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219409 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219420 4913 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219432 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.230032 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.240170 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mmmzm"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.285529 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d976374c-9adc-426a-9593-43e617e72281" (UID: "d976374c-9adc-426a-9593-43e617e72281"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.320409 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") pod \"92ab7368-d5ff-4ecc-846a-96791a313bce\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.320487 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") pod \"92ab7368-d5ff-4ecc-846a-96791a313bce\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.320574 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") pod \"92ab7368-d5ff-4ecc-846a-96791a313bce\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.320835 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.321556 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities" (OuterVolumeSpecName: "utilities") pod "92ab7368-d5ff-4ecc-846a-96791a313bce" (UID: "92ab7368-d5ff-4ecc-846a-96791a313bce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.323329 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8" (OuterVolumeSpecName: "kube-api-access-r9zd8") pod "92ab7368-d5ff-4ecc-846a-96791a313bce" (UID: "92ab7368-d5ff-4ecc-846a-96791a313bce"). InnerVolumeSpecName "kube-api-access-r9zd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.371139 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92ab7368-d5ff-4ecc-846a-96791a313bce" (UID: "92ab7368-d5ff-4ecc-846a-96791a313bce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.422085 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.422121 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.422134 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503762 4913 generic.go:334] "Generic (PLEG): container finished" podID="f2b20a33-f426-426f-9657-3d11d403629f" containerID="f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108" exitCode=0
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503796 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerDied","Data":"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503840 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerDied","Data":"6a8e2ac63fb84aa47578d17a8198d55bdad0c3fb7a2896b7a8bd7e3526aa7149"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503851 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlq6"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503870 4913 scope.go:117] "RemoveContainer" containerID="f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.507561 4913 generic.go:334] "Generic (PLEG): container finished" podID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerID="b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5" exitCode=0
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.507617 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerDied","Data":"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.507653 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerDied","Data":"eb2a4164400078d5e47383eb8825b8a46cafb4407ff81311bae02795bf3351aa"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.507672 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fszdj"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.510201 4913 generic.go:334] "Generic (PLEG): container finished" podID="d976374c-9adc-426a-9593-43e617e72281" containerID="f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9" exitCode=0
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.510241 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpc4m"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.510272 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerDied","Data":"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.510294 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerDied","Data":"cd166342c5c7d3828aa55b99bbc4cb3c9d3bdf94c3c49466b8128a155f8f51f9"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.512553 4913 generic.go:334] "Generic (PLEG): container finished" podID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerID="ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3" exitCode=0
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.512627 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" event={"ID":"f3e3e7a7-a59e-4d12-8499-38ad4a72832d","Type":"ContainerDied","Data":"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.512646 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" event={"ID":"f3e3e7a7-a59e-4d12-8499-38ad4a72832d","Type":"ContainerDied","Data":"cb3977af5e68023242bf0ddc97686fb8058507b9de52582bb7d762e6b09403d5"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.512706 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.517823 4913 generic.go:334] "Generic (PLEG): container finished" podID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerID="10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c" exitCode=0
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.518074 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlb56"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.518754 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerDied","Data":"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.518822 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerDied","Data":"9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.529114 4913 generic.go:334] "Generic (PLEG): container finished" podID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerID="791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2" exitCode=0
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.529311 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffbwk"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.536276 4913 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mmmzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body=
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.536318 4913 scope.go:117] "RemoveContainer" containerID="7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.536344 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" podUID="9850b956-f0a1-4e29-b5c2-703b0aa7b697" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560027 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560076 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerDied","Data":"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560106 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerDied","Data":"56ab7cdf728ac690777654ae4eaf5e6fc42307f0dee5ce8045bb907e80f0f634"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560124 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" event={"ID":"9850b956-f0a1-4e29-b5c2-703b0aa7b697","Type":"ContainerStarted","Data":"73c3d3da53f3d4e78dea8b18514995ad46e59b992a6bdecd72154ecb5e4a4cde"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560164 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" event={"ID":"9850b956-f0a1-4e29-b5c2-703b0aa7b697","Type":"ContainerStarted","Data":"3fafd9712950a6993939b2c1d355ca5c639733783d48611accc373dd03498d3f"}
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.565167 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" podStartSLOduration=1.5651457020000001 podStartE2EDuration="1.565145702s" podCreationTimestamp="2026-01-21 06:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:06.561450949 +0000 UTC m=+296.357810632" watchObservedRunningTime="2026-01-21 06:40:06.565145702 +0000 UTC m=+296.361505395"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.607124 4913 scope.go:117] "RemoveContainer" containerID="6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.625656 4913 scope.go:117] "RemoveContainer" containerID="f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.627791 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108\": container with ID starting with f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108 not found: ID does not exist" containerID="f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.627835 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"} err="failed to get container status \"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108\": rpc error: code = NotFound desc = could not find container \"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108\": container with ID starting with f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.627865 4913 scope.go:117] "RemoveContainer" containerID="7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.628084 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033\": container with ID starting with 7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033 not found: ID does not exist" containerID="7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.628100 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033"} err="failed to get container status \"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033\": rpc error: code = NotFound desc = could not find container \"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033\": container with ID starting with 7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.628113 4913 scope.go:117] "RemoveContainer" containerID="6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.628622 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9\": container with ID starting with 6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9 not found: ID does not exist" containerID="6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.628672 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9"} err="failed to get container status \"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9\": rpc error: code = NotFound desc = could not find container \"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9\": container with ID starting with 6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.628738 4913 scope.go:117] "RemoveContainer" containerID="b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.635878 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.639823 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.647107 4913 scope.go:117] "RemoveContainer" containerID="eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.664628 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.671035 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.674834 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.678611 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.682685 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.686022 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.686360 4913 scope.go:117] "RemoveContainer" containerID="c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.692254 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.697628 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.701198 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.704769 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"]
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.709082 4913 scope.go:117] "RemoveContainer" containerID="b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.710306 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5\": container with ID starting with b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5 not found: ID does not exist" containerID="b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.710405 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"} err="failed to get container status \"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5\": rpc error: code = NotFound desc = could not find container \"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5\": container with ID starting with b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.710508 4913 scope.go:117] "RemoveContainer" containerID="eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.711822 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf\": container with ID starting with eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf not found: ID does not exist" containerID="eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.711861 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"} err="failed to get container status \"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf\": rpc error: code = NotFound desc = could not find container \"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf\": container with ID starting with eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.711903 4913 scope.go:117] "RemoveContainer" containerID="c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.712427 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635\": container with ID starting with c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635 not found: ID does not exist" containerID="c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.712533 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635"} err="failed to get container status \"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635\": rpc error: code = NotFound desc = could not find container \"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635\": container with ID starting with c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.712620 4913 scope.go:117] "RemoveContainer" containerID="f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.726885 4913 scope.go:117] "RemoveContainer" containerID="79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.740309 4913 scope.go:117] "RemoveContainer" containerID="a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.757937 4913 scope.go:117] "RemoveContainer" containerID="f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.758372 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9\": container with ID starting with f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9 not found: ID does not exist" containerID="f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.758465 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"} err="failed to get container status \"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9\": rpc error: code = NotFound desc = could not find container \"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9\": container with ID starting with f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.758580 4913 scope.go:117] "RemoveContainer" containerID="79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.758958 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be\": container with ID starting with 79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be not found: ID does not exist" containerID="79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.759036 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be"} err="failed to get container status \"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be\": rpc error: code = NotFound desc = could not find container \"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be\": container with ID starting with 79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.759101 4913 scope.go:117] "RemoveContainer" containerID="a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.759663 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9\": container with ID starting with a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9 not found: ID does not exist" containerID="a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.759708 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9"} err="failed to get container status \"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9\": rpc error: code = NotFound desc = could not find container \"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9\": container with ID starting with a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.759754 4913 scope.go:117] "RemoveContainer" containerID="ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.773639 4913 scope.go:117] "RemoveContainer" containerID="ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.774029 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3\": container with ID starting with ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3 not found: ID does not exist" containerID="ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.774123 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"} err="failed to get container status \"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3\": rpc error: code = NotFound desc = could not find container \"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3\": container with ID starting with ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.774192 4913 scope.go:117] "RemoveContainer" containerID="10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.787934 4913 scope.go:117] "RemoveContainer" containerID="d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.799907 4913 scope.go:117] "RemoveContainer" containerID="afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.812034 4913 scope.go:117] "RemoveContainer" containerID="10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.812419 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c\": container with ID starting with 10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c not found: ID does not exist" containerID="10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.812446 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"} err="failed to get container status \"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c\": rpc error: code = NotFound desc = could not find container \"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c\": container with ID starting with 10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.812472 4913 scope.go:117] "RemoveContainer" containerID="d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.812932 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc\": container with ID starting with d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc not found: ID does not exist" containerID="d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.812971 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc"} err="failed to get container status \"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc\": rpc error: code = NotFound desc = could not find container \"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc\": container with ID starting with d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.813003 4913 scope.go:117] "RemoveContainer" containerID="afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.813553 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7\": container with ID starting with afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7 not found: ID does not exist" containerID="afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.813575 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7"} err="failed to get container status \"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7\": rpc error: code = NotFound desc = could not find container \"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7\": container with ID starting with afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.813613 4913 scope.go:117] "RemoveContainer" containerID="791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.824815 4913 scope.go:117] "RemoveContainer" containerID="dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.840538 4913 scope.go:117] "RemoveContainer" containerID="1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.855735 4913 scope.go:117] "RemoveContainer" containerID="791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.856088 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2\": container with ID starting with 791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2 not found: ID does not exist" containerID="791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.856127 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"} err="failed to get container status \"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2\": rpc error: code = NotFound desc = could not find container \"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2\": container with ID starting with 791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.856154 4913 scope.go:117] "RemoveContainer" containerID="dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.856654 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28\": container with ID starting with dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28 not found: ID does not exist" containerID="dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.856691 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28"} err="failed to get container status \"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28\": rpc error: code = NotFound desc = could not find container \"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28\": container with ID starting with dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28 not found: ID does not exist"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.856754 4913 scope.go:117] "RemoveContainer" containerID="1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148"
Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.856986 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148\": container with ID starting with 1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148 not found: ID does not exist" containerID="1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148"
Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.857011 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148"} err="failed to get container status \"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148\": rpc error: code = NotFound desc = could not find container \"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148\": container with ID starting with 1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148 not found: ID does not exist"
Jan 21 06:40:07 crc kubenswrapper[4913]: I0121 06:40:07.558264 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm"
Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.548762 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" path="/var/lib/kubelet/pods/2255f06f-74ad-4308-9575-c04f8c24d4d5/volumes"
Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.550946 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" path="/var/lib/kubelet/pods/92ab7368-d5ff-4ecc-846a-96791a313bce/volumes"
Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.552846 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" path="/var/lib/kubelet/pods/b5a378fe-18a6-4be0-8d56-eaddc377bd8b/volumes"
Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.555464 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d976374c-9adc-426a-9593-43e617e72281" path="/var/lib/kubelet/pods/d976374c-9adc-426a-9593-43e617e72281/volumes"
Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.556959 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b20a33-f426-426f-9657-3d11d403629f" path="/var/lib/kubelet/pods/f2b20a33-f426-426f-9657-3d11d403629f/volumes"
Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.559277 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" path="/var/lib/kubelet/pods/f3e3e7a7-a59e-4d12-8499-38ad4a72832d/volumes"
Jan 21 06:40:10 crc kubenswrapper[4913]: I0121 06:40:10.361731 4913 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 21 06:40:14 crc kubenswrapper[4913]: I0121 06:40:14.949552 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.366513 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"]
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.367347 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" containerID="cri-o://e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5" gracePeriod=30
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.460116 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"]
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.460533 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" containerID="cri-o://a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b" gracePeriod=30
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.626955 4913 generic.go:334] "Generic (PLEG): container finished" podID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerID="e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5" exitCode=0
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.627028 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" event={"ID":"527ef351-fb35-4f58-ae7b-d410c23496c6","Type":"ContainerDied","Data":"e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5"}
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.628869 4913 generic.go:334] "Generic (PLEG): container finished" podID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerID="a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b" exitCode=0
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.628913 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" event={"ID":"82ebe95b-4e82-49aa-8693-52c0998ec7de","Type":"ContainerDied","Data":"a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b"}
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.739165 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4"
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.815723 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918530 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") pod \"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") "
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918600 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") pod \"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") "
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918637 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") pod \"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") "
Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918681 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") pod 
\"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918704 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") pod \"82ebe95b-4e82-49aa-8693-52c0998ec7de\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918720 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") pod \"82ebe95b-4e82-49aa-8693-52c0998ec7de\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918734 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") pod \"82ebe95b-4e82-49aa-8693-52c0998ec7de\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918759 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") pod \"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918839 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") pod \"82ebe95b-4e82-49aa-8693-52c0998ec7de\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.919352 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.919825 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca" (OuterVolumeSpecName: "client-ca") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.920050 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config" (OuterVolumeSpecName: "config") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.920114 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config" (OuterVolumeSpecName: "config") pod "82ebe95b-4e82-49aa-8693-52c0998ec7de" (UID: "82ebe95b-4e82-49aa-8693-52c0998ec7de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.920643 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca" (OuterVolumeSpecName: "client-ca") pod "82ebe95b-4e82-49aa-8693-52c0998ec7de" (UID: "82ebe95b-4e82-49aa-8693-52c0998ec7de"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.924471 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.924621 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "82ebe95b-4e82-49aa-8693-52c0998ec7de" (UID: "82ebe95b-4e82-49aa-8693-52c0998ec7de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.925106 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t" (OuterVolumeSpecName: "kube-api-access-bqr5t") pod "82ebe95b-4e82-49aa-8693-52c0998ec7de" (UID: "82ebe95b-4e82-49aa-8693-52c0998ec7de"). InnerVolumeSpecName "kube-api-access-bqr5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.925502 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq" (OuterVolumeSpecName: "kube-api-access-nhjlq") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "kube-api-access-nhjlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.020867 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021014 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021091 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021157 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021216 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021271 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021326 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021629 4913 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021697 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559000 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"] Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559309 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559339 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559382 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559389 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559395 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559402 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559429 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559436 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559443 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559449 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559473 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559479 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559522 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559529 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559536 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559543 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559551 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559556 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559565 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559571 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559578 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559599 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559606 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559612 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559624 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559629 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559638 4913 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559644 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559651 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559657 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559665 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559671 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559678 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559684 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559691 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559697 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559786 4913 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559796 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559808 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559815 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559823 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559830 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559838 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559846 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.561426 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.567527 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.568272 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.579134 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.579184 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.635636 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" event={"ID":"527ef351-fb35-4f58-ae7b-d410c23496c6","Type":"ContainerDied","Data":"67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc"} Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.635662 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.635682 4913 scope.go:117] "RemoveContainer" containerID="e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.637908 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" event={"ID":"82ebe95b-4e82-49aa-8693-52c0998ec7de","Type":"ContainerDied","Data":"ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea"} Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.637948 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.653509 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.656952 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.657017 4913 scope.go:117] "RemoveContainer" containerID="a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.666296 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.666498 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728011 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728273 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728373 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728518 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728637 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " 
pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728719 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728794 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728867 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728957 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.830486 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.831409 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.832198 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.832569 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.833538 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.834515 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.835883 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.833223 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.832121 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.835435 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.836037 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.834470 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.836326 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.836886 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.837924 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.846090 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.851062 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.862202 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.919991 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.930951 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.990069 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.150358 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"]
Jan 21 06:40:21 crc kubenswrapper[4913]: W0121 06:40:21.160334 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod613ee342_c0db_4722_92fa_633a60ecbb41.slice/crio-e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51 WatchSource:0}: Error finding container e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51: Status 404 returned error can't find the container with id e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.180337 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"]
Jan 21 06:40:21 crc kubenswrapper[4913]: W0121 06:40:21.190365 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf688d1c5_45f1_4e55_a987_df6cf2b954f4.slice/crio-31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15 WatchSource:0}: Error finding container 31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15: Status 404 returned error can't find the container with id 31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.645246 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" event={"ID":"f688d1c5-45f1-4e55-a987-df6cf2b954f4","Type":"ContainerStarted","Data":"85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a"}
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.645599 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" event={"ID":"f688d1c5-45f1-4e55-a987-df6cf2b954f4","Type":"ContainerStarted","Data":"31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15"}
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.647309 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.649390 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" event={"ID":"613ee342-c0db-4722-92fa-633a60ecbb41","Type":"ContainerStarted","Data":"11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc"}
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.649435 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" event={"ID":"613ee342-c0db-4722-92fa-633a60ecbb41","Type":"ContainerStarted","Data":"e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51"}
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.649970 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.651798 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.656203 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.682073 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" podStartSLOduration=2.682056076 podStartE2EDuration="2.682056076s" podCreationTimestamp="2026-01-21 06:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:21.668103879 +0000 UTC m=+311.464463552" watchObservedRunningTime="2026-01-21 06:40:21.682056076 +0000 UTC m=+311.478415749"
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.724700 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" podStartSLOduration=2.724684959 podStartE2EDuration="2.724684959s" podCreationTimestamp="2026-01-21 06:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:21.684159514 +0000 UTC m=+311.480519187" watchObservedRunningTime="2026-01-21 06:40:21.724684959 +0000 UTC m=+311.521044632"
Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.956804 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 06:40:22 crc kubenswrapper[4913]: I0121 06:40:22.532697 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" path="/var/lib/kubelet/pods/527ef351-fb35-4f58-ae7b-d410c23496c6/volumes"
Jan 21 06:40:22 crc kubenswrapper[4913]: I0121 06:40:22.533502 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" path="/var/lib/kubelet/pods/82ebe95b-4e82-49aa-8693-52c0998ec7de/volumes"
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.240536 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"]
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.241001 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" containerName="controller-manager" containerID="cri-o://11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc" gracePeriod=30
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.258027 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"]
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.258254 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerName="route-controller-manager" containerID="cri-o://85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a" gracePeriod=30
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.685089 4913 generic.go:334] "Generic (PLEG): container finished" podID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerID="85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a" exitCode=0
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.685159 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" event={"ID":"f688d1c5-45f1-4e55-a987-df6cf2b954f4","Type":"ContainerDied","Data":"85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a"}
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.686652 4913 generic.go:334] "Generic (PLEG): container finished" podID="613ee342-c0db-4722-92fa-633a60ecbb41" containerID="11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc" exitCode=0
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.686670 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" event={"ID":"613ee342-c0db-4722-92fa-633a60ecbb41","Type":"ContainerDied","Data":"11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc"}
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.792748 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.915533 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") pod \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") "
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.915575 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") pod \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") "
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.915618 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") pod \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") "
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.915653 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") pod \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") "
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.916294 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "f688d1c5-45f1-4e55-a987-df6cf2b954f4" (UID: "f688d1c5-45f1-4e55-a987-df6cf2b954f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.916396 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config" (OuterVolumeSpecName: "config") pod "f688d1c5-45f1-4e55-a987-df6cf2b954f4" (UID: "f688d1c5-45f1-4e55-a987-df6cf2b954f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.920443 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs" (OuterVolumeSpecName: "kube-api-access-xhqzs") pod "f688d1c5-45f1-4e55-a987-df6cf2b954f4" (UID: "f688d1c5-45f1-4e55-a987-df6cf2b954f4"). InnerVolumeSpecName "kube-api-access-xhqzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.920519 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f688d1c5-45f1-4e55-a987-df6cf2b954f4" (UID: "f688d1c5-45f1-4e55-a987-df6cf2b954f4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.927887 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.016916 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.016945 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.016955 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.016963 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118331 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") "
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118382 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") "
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118434 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") "
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118471 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") "
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118493 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") "
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.119254 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca" (OuterVolumeSpecName: "client-ca") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.119296 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.119467 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config" (OuterVolumeSpecName: "config") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.124037 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.124119 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8" (OuterVolumeSpecName: "kube-api-access-45sb8") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "kube-api-access-45sb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220166 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220234 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220245 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220253 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220262 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.693737 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" event={"ID":"f688d1c5-45f1-4e55-a987-df6cf2b954f4","Type":"ContainerDied","Data":"31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15"}
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.693786 4913 scope.go:117] "RemoveContainer" containerID="85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a"
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.694552 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.695707 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.695709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" event={"ID":"613ee342-c0db-4722-92fa-633a60ecbb41","Type":"ContainerDied","Data":"e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51"}
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.714750 4913 scope.go:117] "RemoveContainer" containerID="11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc"
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.726395 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"]
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.734072 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"]
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.745305 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"]
Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.750876 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"]
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.537903 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" path="/var/lib/kubelet/pods/613ee342-c0db-4722-92fa-633a60ecbb41/volumes"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.539358 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" path="/var/lib/kubelet/pods/f688d1c5-45f1-4e55-a987-df6cf2b954f4/volumes"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.566799 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"]
Jan 21 06:40:28 crc kubenswrapper[4913]: E0121 06:40:28.567223 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" containerName="controller-manager"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.567263 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" containerName="controller-manager"
Jan 21 06:40:28 crc kubenswrapper[4913]: E0121 06:40:28.567312 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerName="route-controller-manager"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.567331 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerName="route-controller-manager"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.567532 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" containerName="controller-manager"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.567579 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerName="route-controller-manager"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.568223 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.571096 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.571425 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.571708 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.571977 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.572215 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.572401 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"]
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.572628 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.573279 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.576686 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"]
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.576916 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.577771 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.577809 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.577880 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.577902 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.578065 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.588713 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.592632 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"]
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.739032 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741106 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741283 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741462 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741696 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741877 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.742043 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.742234 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.742395 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843578 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843726 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843808 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843845 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843906 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843950 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843984 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.844018 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.844055 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.845558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") pod
\"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.846699 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.848164 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.848564 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.848839 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.850972 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.853055 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.869581 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.870986 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.894965 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.903812 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.126577 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"]
Jan 21 06:40:29 crc kubenswrapper[4913]: W0121 06:40:29.134219 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799094d2_a718_4044_b16d_8a011cc3ecaa.slice/crio-31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5 WatchSource:0}: Error finding container 31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5: Status 404 returned error can't find the container with id 31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.388385 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"]
Jan 21 06:40:29 crc kubenswrapper[4913]: W0121 06:40:29.396000 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b31176e_1ac2_453f_8750_e2524da5cb9b.slice/crio-d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833 WatchSource:0}: Error finding container d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833: Status 404 returned error can't find the container with id d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.711137 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" event={"ID":"5b31176e-1ac2-453f-8750-e2524da5cb9b","Type":"ContainerStarted","Data":"3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08"}
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.711435 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" event={"ID":"5b31176e-1ac2-453f-8750-e2524da5cb9b","Type":"ContainerStarted","Data":"d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833"}
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.711454 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.715727 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" event={"ID":"799094d2-a718-4044-b16d-8a011cc3ecaa","Type":"ContainerStarted","Data":"f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240"}
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.715788 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" event={"ID":"799094d2-a718-4044-b16d-8a011cc3ecaa","Type":"ContainerStarted","Data":"31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5"}
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.716031 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.721928 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.734269 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" podStartSLOduration=3.734246194 podStartE2EDuration="3.734246194s" podCreationTimestamp="2026-01-21 06:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:29.728751221 +0000 UTC m=+319.525110934" watchObservedRunningTime="2026-01-21 06:40:29.734246194 +0000 UTC m=+319.530605897"
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.753116 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" podStartSLOduration=3.753096656 podStartE2EDuration="3.753096656s" podCreationTimestamp="2026-01-21 06:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:29.75033551 +0000 UTC m=+319.546695183" watchObservedRunningTime="2026-01-21 06:40:29.753096656 +0000 UTC m=+319.549456319"
Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.889459 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:30 crc kubenswrapper[4913]: I0121 06:40:30.425916 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 21 06:40:38 crc kubenswrapper[4913]: I0121 06:40:38.319190 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:40:38 crc kubenswrapper[4913]: I0121 06:40:38.319896 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:40:39 crc kubenswrapper[4913]: I0121 06:40:39.403325 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"]
Jan 21 06:40:39 crc kubenswrapper[4913]: I0121 06:40:39.404200 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerName="route-controller-manager" containerID="cri-o://3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08" gracePeriod=30
Jan 21 06:40:39 crc kubenswrapper[4913]: I0121 06:40:39.773001 4913 generic.go:334] "Generic (PLEG): container finished" podID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerID="3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08" exitCode=0
Jan 21 06:40:39 crc kubenswrapper[4913]: I0121 06:40:39.773089 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" event={"ID":"5b31176e-1ac2-453f-8750-e2524da5cb9b","Type":"ContainerDied","Data":"3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08"}
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.532174 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.571865 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"]
Jan 21 06:40:40 crc kubenswrapper[4913]: E0121 06:40:40.572150 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerName="route-controller-manager"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.572168 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerName="route-controller-manager"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.572295 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerName="route-controller-manager"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.572767 4913 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.591660 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-config\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.591737 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4412c4f-3b57-428e-8257-1bd0e664a1ad-serving-cert\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.591787 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xpc\" (UniqueName: \"kubernetes.io/projected/e4412c4f-3b57-428e-8257-1bd0e664a1ad-kube-api-access-x6xpc\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.591848 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-client-ca\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.603336 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"]
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.692450 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") pod \"5b31176e-1ac2-453f-8750-e2524da5cb9b\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") "
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.692890 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") pod \"5b31176e-1ac2-453f-8750-e2524da5cb9b\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") "
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693029 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") pod \"5b31176e-1ac2-453f-8750-e2524da5cb9b\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") "
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693225 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") pod \"5b31176e-1ac2-453f-8750-e2524da5cb9b\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") "
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693501 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4412c4f-3b57-428e-8257-1bd0e664a1ad-serving-cert\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693613 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config" (OuterVolumeSpecName: "config") pod "5b31176e-1ac2-453f-8750-e2524da5cb9b" (UID: "5b31176e-1ac2-453f-8750-e2524da5cb9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693753 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xpc\" (UniqueName: \"kubernetes.io/projected/e4412c4f-3b57-428e-8257-1bd0e664a1ad-kube-api-access-x6xpc\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693879 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-client-ca\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.694062 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-config\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.694198 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693548 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b31176e-1ac2-453f-8750-e2524da5cb9b" (UID: "5b31176e-1ac2-453f-8750-e2524da5cb9b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.695389 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-config\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.699915 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-client-ca\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.700168 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b31176e-1ac2-453f-8750-e2524da5cb9b" (UID: "5b31176e-1ac2-453f-8750-e2524da5cb9b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.700257 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v" (OuterVolumeSpecName: "kube-api-access-vgk7v") pod "5b31176e-1ac2-453f-8750-e2524da5cb9b" (UID: "5b31176e-1ac2-453f-8750-e2524da5cb9b"). InnerVolumeSpecName "kube-api-access-vgk7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.701869 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4412c4f-3b57-428e-8257-1bd0e664a1ad-serving-cert\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.711409 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xpc\" (UniqueName: \"kubernetes.io/projected/e4412c4f-3b57-428e-8257-1bd0e664a1ad-kube-api-access-x6xpc\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.780198 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" event={"ID":"5b31176e-1ac2-453f-8750-e2524da5cb9b","Type":"ContainerDied","Data":"d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833"}
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.780265 4913 scope.go:117] "RemoveContainer" containerID="3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.780261 4913 util.go:48] "No
ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.795242 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.795270 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.795284 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") on node \"crc\" DevicePath \"\""
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.810295 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"]
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.813314 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"]
Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.901093 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.352251 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"]
Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.790236 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" event={"ID":"e4412c4f-3b57-428e-8257-1bd0e664a1ad","Type":"ContainerStarted","Data":"e7fd15f7e91eb86d6fc8070c6e67476f7d90077f660a5091b97b7b70b42a386f"}
Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.790287 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" event={"ID":"e4412c4f-3b57-428e-8257-1bd0e664a1ad","Type":"ContainerStarted","Data":"ee749b927d29b8c81909378c1ab9c428c847c05bd99d1c87b2d5a7695e8b627e"}
Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.790630 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.823708 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" podStartSLOduration=2.823683858 podStartE2EDuration="2.823683858s" podCreationTimestamp="2026-01-21 06:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:41.821783295 +0000 UTC m=+331.618142998" watchObservedRunningTime="2026-01-21 06:40:41.823683858 +0000 UTC m=+331.620043571"
Jan 21 06:40:42 crc kubenswrapper[4913]: I0121 06:40:42.054687 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"
Jan 21 06:40:42 crc kubenswrapper[4913]: I0121 06:40:42.539479 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" path="/var/lib/kubelet/pods/5b31176e-1ac2-453f-8750-e2524da5cb9b/volumes"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.295251 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rp8wd"]
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.297471 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.301986 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.311039 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp8wd"]
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.440894 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cz6m\" (UniqueName: \"kubernetes.io/projected/a8ba24ca-c946-4684-817a-0ae5bada3ecd-kube-api-access-6cz6m\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.440960 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-catalog-content\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.441245 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-utilities\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.541743 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-catalog-content\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.541816 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-utilities\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.542249 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-catalog-content\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.542603 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cz6m\" (UniqueName: \"kubernetes.io/projected/a8ba24ca-c946-4684-817a-0ae5bada3ecd-kube-api-access-6cz6m\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.542850 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-utilities\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.574813 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cz6m\" (UniqueName: \"kubernetes.io/projected/a8ba24ca-c946-4684-817a-0ae5bada3ecd-kube-api-access-6cz6m\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.613102 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp8wd"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.889128 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tmk45"]
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.892362 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmk45"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.902027 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.908391 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmk45"]
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.949572 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-catalog-content\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.949741 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-utilities\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45"
Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.949798 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7bsl\" (UniqueName: \"kubernetes.io/projected/e81adc58-27d6-4087-9902-6e61aba9bfaa-kube-api-access-r7bsl\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45"
Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.050819 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-utilities\") pod \"community-operators-tmk45\" (UID:
\"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.050879 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7bsl\" (UniqueName: \"kubernetes.io/projected/e81adc58-27d6-4087-9902-6e61aba9bfaa-kube-api-access-r7bsl\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.050984 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-catalog-content\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.052040 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-utilities\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.052052 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-catalog-content\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.074401 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7bsl\" (UniqueName: \"kubernetes.io/projected/e81adc58-27d6-4087-9902-6e61aba9bfaa-kube-api-access-r7bsl\") pod \"community-operators-tmk45\" (UID: 
\"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.086382 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp8wd"] Jan 21 06:40:45 crc kubenswrapper[4913]: W0121 06:40:45.097135 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ba24ca_c946_4684_817a_0ae5bada3ecd.slice/crio-da2c5653d71b153aae062c256c9ce6be89bce9d97d10846b16e54f7989ba5d8f WatchSource:0}: Error finding container da2c5653d71b153aae062c256c9ce6be89bce9d97d10846b16e54f7989ba5d8f: Status 404 returned error can't find the container with id da2c5653d71b153aae062c256c9ce6be89bce9d97d10846b16e54f7989ba5d8f Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.263355 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.695832 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmk45"] Jan 21 06:40:45 crc kubenswrapper[4913]: W0121 06:40:45.697963 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode81adc58_27d6_4087_9902_6e61aba9bfaa.slice/crio-0b9fc3c6a2dbdf48cc98d44c5567138f169d3df4ec3e390a1e53046b5c79663b WatchSource:0}: Error finding container 0b9fc3c6a2dbdf48cc98d44c5567138f169d3df4ec3e390a1e53046b5c79663b: Status 404 returned error can't find the container with id 0b9fc3c6a2dbdf48cc98d44c5567138f169d3df4ec3e390a1e53046b5c79663b Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.818397 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" 
event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerStarted","Data":"0b9fc3c6a2dbdf48cc98d44c5567138f169d3df4ec3e390a1e53046b5c79663b"} Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.822954 4913 generic.go:334] "Generic (PLEG): container finished" podID="a8ba24ca-c946-4684-817a-0ae5bada3ecd" containerID="45e52ee2e29f70b7bb59c5aa4d34f2597d2529e6e5b2e2f0d9e05a4a1f9611ce" exitCode=0 Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.823021 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp8wd" event={"ID":"a8ba24ca-c946-4684-817a-0ae5bada3ecd","Type":"ContainerDied","Data":"45e52ee2e29f70b7bb59c5aa4d34f2597d2529e6e5b2e2f0d9e05a4a1f9611ce"} Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.823066 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp8wd" event={"ID":"a8ba24ca-c946-4684-817a-0ae5bada3ecd","Type":"ContainerStarted","Data":"da2c5653d71b153aae062c256c9ce6be89bce9d97d10846b16e54f7989ba5d8f"} Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.688050 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-frpd4"] Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.691025 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.693495 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.708988 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-frpd4"] Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.834113 4913 generic.go:334] "Generic (PLEG): container finished" podID="e81adc58-27d6-4087-9902-6e61aba9bfaa" containerID="a52fe07e241638d9c176b96b1b1c8da237c3f0bbcfb011fc95b9f09991cda53f" exitCode=0 Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.834181 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerDied","Data":"a52fe07e241638d9c176b96b1b1c8da237c3f0bbcfb011fc95b9f09991cda53f"} Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.841542 4913 generic.go:334] "Generic (PLEG): container finished" podID="a8ba24ca-c946-4684-817a-0ae5bada3ecd" containerID="4f2d57dc6e7c27c4341c60afc83885041a3478ddc89612b905dbfc2303bb30c1" exitCode=0 Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.841725 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp8wd" event={"ID":"a8ba24ca-c946-4684-817a-0ae5bada3ecd","Type":"ContainerDied","Data":"4f2d57dc6e7c27c4341c60afc83885041a3478ddc89612b905dbfc2303bb30c1"} Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.876551 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-utilities\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" 
Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.876677 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxps\" (UniqueName: \"kubernetes.io/projected/b6d83360-7a65-47b3-98df-42902962da8d-kube-api-access-4wxps\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.876718 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-catalog-content\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.977777 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-utilities\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.977933 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxps\" (UniqueName: \"kubernetes.io/projected/b6d83360-7a65-47b3-98df-42902962da8d-kube-api-access-4wxps\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.978447 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-catalog-content\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " 
pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.979192 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-catalog-content\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.979727 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-utilities\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.013422 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxps\" (UniqueName: \"kubernetes.io/projected/b6d83360-7a65-47b3-98df-42902962da8d-kube-api-access-4wxps\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.287442 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pp6lf"] Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.290406 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.294689 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.307525 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.312185 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pp6lf"] Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.486510 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvb79\" (UniqueName: \"kubernetes.io/projected/4fd9a0ea-0344-4e90-87f0-34a568804f80-kube-api-access-dvb79\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.487038 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-catalog-content\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.487152 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-utilities\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.589730 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvb79\" (UniqueName: \"kubernetes.io/projected/4fd9a0ea-0344-4e90-87f0-34a568804f80-kube-api-access-dvb79\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.589843 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-catalog-content\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.589925 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-utilities\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.591691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-utilities\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.591881 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-catalog-content\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.621114 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvb79\" (UniqueName: \"kubernetes.io/projected/4fd9a0ea-0344-4e90-87f0-34a568804f80-kube-api-access-dvb79\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.632000 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.848237 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerStarted","Data":"9945b48a1782babeba1f207628c5a2bceb7366fc4e50cda0064dba34b9b61e3e"} Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.850462 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp8wd" event={"ID":"a8ba24ca-c946-4684-817a-0ae5bada3ecd","Type":"ContainerStarted","Data":"8e0561a75daf86567b4a196af0f68d6f75bf5f9b1c147cb955c7facc99271e17"} Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.889449 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-frpd4"] Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.904252 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rp8wd" podStartSLOduration=2.469681473 podStartE2EDuration="3.904236835s" podCreationTimestamp="2026-01-21 06:40:44 +0000 UTC" firstStartedPulling="2026-01-21 06:40:45.825150152 +0000 UTC m=+335.621509855" lastFinishedPulling="2026-01-21 06:40:47.259705504 +0000 UTC m=+337.056065217" observedRunningTime="2026-01-21 06:40:47.903724221 +0000 UTC m=+337.700083914" watchObservedRunningTime="2026-01-21 06:40:47.904236835 +0000 UTC m=+337.700596508" Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.060165 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pp6lf"] Jan 21 06:40:48 crc kubenswrapper[4913]: W0121 06:40:48.096461 4913 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fd9a0ea_0344_4e90_87f0_34a568804f80.slice/crio-f32a155c6268e30fedf5fc68e1e52a94f09ca747f5f1003efcaffd901cacfb25 WatchSource:0}: Error finding container f32a155c6268e30fedf5fc68e1e52a94f09ca747f5f1003efcaffd901cacfb25: Status 404 returned error can't find the container with id f32a155c6268e30fedf5fc68e1e52a94f09ca747f5f1003efcaffd901cacfb25 Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.857066 4913 generic.go:334] "Generic (PLEG): container finished" podID="4fd9a0ea-0344-4e90-87f0-34a568804f80" containerID="c1c9628ec256da7c4bd3c2b80145eaa4ffc63343ceeb07fffbacebe0261eaae8" exitCode=0 Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.857165 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerDied","Data":"c1c9628ec256da7c4bd3c2b80145eaa4ffc63343ceeb07fffbacebe0261eaae8"} Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.857395 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerStarted","Data":"f32a155c6268e30fedf5fc68e1e52a94f09ca747f5f1003efcaffd901cacfb25"} Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.859327 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d83360-7a65-47b3-98df-42902962da8d" containerID="667334d76a9d7bd122e195a230ff4f34811cd491a6beca89ebe519edbbf4892e" exitCode=0 Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.859358 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerDied","Data":"667334d76a9d7bd122e195a230ff4f34811cd491a6beca89ebe519edbbf4892e"} Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.859390 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerStarted","Data":"d1b20acf8549a3b7da61ace37541771a495849a99a5c19d6aa33d057d8407554"} Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.863524 4913 generic.go:334] "Generic (PLEG): container finished" podID="e81adc58-27d6-4087-9902-6e61aba9bfaa" containerID="9945b48a1782babeba1f207628c5a2bceb7366fc4e50cda0064dba34b9b61e3e" exitCode=0 Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.863580 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerDied","Data":"9945b48a1782babeba1f207628c5a2bceb7366fc4e50cda0064dba34b9b61e3e"} Jan 21 06:40:49 crc kubenswrapper[4913]: I0121 06:40:49.871384 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerStarted","Data":"020d57f9dcefc7a806ce9c2910a2cbf1097007d57f04c2232488c6d3125f10b9"} Jan 21 06:40:49 crc kubenswrapper[4913]: I0121 06:40:49.874295 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerStarted","Data":"fcacc05ede6e8fafd05a06a94622a0def2b5d20848fe6d676a116c77b796bd4c"} Jan 21 06:40:49 crc kubenswrapper[4913]: I0121 06:40:49.876187 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerStarted","Data":"ac586c97f3e4e17cb3c0a20832d1b157caadc90b3ccbb578d1e71b7770546a8a"} Jan 21 06:40:49 crc kubenswrapper[4913]: I0121 06:40:49.920131 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tmk45" podStartSLOduration=3.410830549 
podStartE2EDuration="5.920114283s" podCreationTimestamp="2026-01-21 06:40:44 +0000 UTC" firstStartedPulling="2026-01-21 06:40:46.83665235 +0000 UTC m=+336.633012023" lastFinishedPulling="2026-01-21 06:40:49.345936064 +0000 UTC m=+339.142295757" observedRunningTime="2026-01-21 06:40:49.917335116 +0000 UTC m=+339.713694799" watchObservedRunningTime="2026-01-21 06:40:49.920114283 +0000 UTC m=+339.716473956" Jan 21 06:40:50 crc kubenswrapper[4913]: I0121 06:40:50.886715 4913 generic.go:334] "Generic (PLEG): container finished" podID="4fd9a0ea-0344-4e90-87f0-34a568804f80" containerID="ac586c97f3e4e17cb3c0a20832d1b157caadc90b3ccbb578d1e71b7770546a8a" exitCode=0 Jan 21 06:40:50 crc kubenswrapper[4913]: I0121 06:40:50.886845 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerDied","Data":"ac586c97f3e4e17cb3c0a20832d1b157caadc90b3ccbb578d1e71b7770546a8a"} Jan 21 06:40:50 crc kubenswrapper[4913]: I0121 06:40:50.890786 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d83360-7a65-47b3-98df-42902962da8d" containerID="020d57f9dcefc7a806ce9c2910a2cbf1097007d57f04c2232488c6d3125f10b9" exitCode=0 Jan 21 06:40:50 crc kubenswrapper[4913]: I0121 06:40:50.890933 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerDied","Data":"020d57f9dcefc7a806ce9c2910a2cbf1097007d57f04c2232488c6d3125f10b9"} Jan 21 06:40:51 crc kubenswrapper[4913]: I0121 06:40:51.903137 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerStarted","Data":"bc9f718296dd53b25974786ace8bf59b36fd595a623e7bbabfbb20de0b41f833"} Jan 21 06:40:51 crc kubenswrapper[4913]: I0121 06:40:51.932012 4913 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-pp6lf" podStartSLOduration=2.483261757 podStartE2EDuration="4.9319895s" podCreationTimestamp="2026-01-21 06:40:47 +0000 UTC" firstStartedPulling="2026-01-21 06:40:48.859377048 +0000 UTC m=+338.655736771" lastFinishedPulling="2026-01-21 06:40:51.308104801 +0000 UTC m=+341.104464514" observedRunningTime="2026-01-21 06:40:51.931243648 +0000 UTC m=+341.727603351" watchObservedRunningTime="2026-01-21 06:40:51.9319895 +0000 UTC m=+341.728349213" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.613453 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.613733 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.673252 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.922536 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerStarted","Data":"05e87eff78c4ffb20e0b35bfc633025ab1fc926ebaf65ee4c40e216e1b85ce5e"} Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.942092 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-frpd4" podStartSLOduration=3.781829863 podStartE2EDuration="8.942075734s" podCreationTimestamp="2026-01-21 06:40:46 +0000 UTC" firstStartedPulling="2026-01-21 06:40:48.860422637 +0000 UTC m=+338.656782310" lastFinishedPulling="2026-01-21 06:40:54.020668508 +0000 UTC m=+343.817028181" observedRunningTime="2026-01-21 06:40:54.942020833 +0000 UTC m=+344.738380506" watchObservedRunningTime="2026-01-21 
06:40:54.942075734 +0000 UTC m=+344.738435407" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.974261 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:55 crc kubenswrapper[4913]: I0121 06:40:55.264662 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:55 crc kubenswrapper[4913]: I0121 06:40:55.264996 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:55 crc kubenswrapper[4913]: I0121 06:40:55.322787 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:55 crc kubenswrapper[4913]: I0121 06:40:55.986137 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.307949 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.308672 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.352752 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.632298 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.632366 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:58 crc kubenswrapper[4913]: I0121 
06:40:58.674523 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pp6lf" podUID="4fd9a0ea-0344-4e90-87f0-34a568804f80" containerName="registry-server" probeResult="failure" output=<
Jan 21 06:40:58 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s
Jan 21 06:40:58 crc kubenswrapper[4913]: >
Jan 21 06:41:07 crc kubenswrapper[4913]: I0121 06:41:07.371437 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-frpd4"
Jan 21 06:41:07 crc kubenswrapper[4913]: I0121 06:41:07.707551 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pp6lf"
Jan 21 06:41:07 crc kubenswrapper[4913]: I0121 06:41:07.781085 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pp6lf"
Jan 21 06:41:08 crc kubenswrapper[4913]: I0121 06:41:08.319382 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:41:08 crc kubenswrapper[4913]: I0121 06:41:08.319781 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:41:19 crc kubenswrapper[4913]: I0121 06:41:19.390809 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"]
Jan 21 06:41:19 crc kubenswrapper[4913]: I0121 06:41:19.392562 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerName="controller-manager" containerID="cri-o://f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240" gracePeriod=30
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.106536 4913 generic.go:334] "Generic (PLEG): container finished" podID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerID="f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240" exitCode=0
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.106648 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" event={"ID":"799094d2-a718-4044-b16d-8a011cc3ecaa","Type":"ContainerDied","Data":"f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240"}
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.341627 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512189 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") "
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512251 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") "
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512278 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") "
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") "
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512368 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") "
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512985 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.513000 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca" (OuterVolumeSpecName: "client-ca") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.513413 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config" (OuterVolumeSpecName: "config") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.519000 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn" (OuterVolumeSpecName: "kube-api-access-bn8zn") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "kube-api-access-bn8zn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.519847 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613658 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613698 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") on node \"crc\" DevicePath \"\""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613721 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") on node \"crc\" DevicePath \"\""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613735 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613750 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.614962 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"]
Jan 21 06:41:20 crc kubenswrapper[4913]: E0121 06:41:20.615317 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerName="controller-manager"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.615344 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerName="controller-manager"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.615510 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerName="controller-manager"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.616099 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.619703 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"]
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.714630 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cccfc082-b6a7-4769-aea5-9fe750c1f724-serving-cert\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.714872 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/cccfc082-b6a7-4769-aea5-9fe750c1f724-kube-api-access-kxghl\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.714897 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-proxy-ca-bundles\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.715018 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-config\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.715069 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-client-ca\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816618 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cccfc082-b6a7-4769-aea5-9fe750c1f724-serving-cert\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816664 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/cccfc082-b6a7-4769-aea5-9fe750c1f724-kube-api-access-kxghl\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816683 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-proxy-ca-bundles\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816711 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-config\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816741 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-client-ca\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.817472 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-client-ca\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.818820 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-proxy-ca-bundles\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.820117 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-config\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.824966 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cccfc082-b6a7-4769-aea5-9fe750c1f724-serving-cert\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.837341 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/cccfc082-b6a7-4769-aea5-9fe750c1f724-kube-api-access-kxghl\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.943455 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.114044 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" event={"ID":"799094d2-a718-4044-b16d-8a011cc3ecaa","Type":"ContainerDied","Data":"31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5"}
Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.114114 4913 scope.go:117] "RemoveContainer" containerID="f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240"
Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.114117 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz"
Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.128257 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"]
Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.145408 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"]
Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.157487 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"]
Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.129585 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" event={"ID":"cccfc082-b6a7-4769-aea5-9fe750c1f724","Type":"ContainerStarted","Data":"dfe2887aa4939906fe8164648933b1dbe6d39d785e088d17d90af9eb91805d11"}
Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.130802 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.130867 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" event={"ID":"cccfc082-b6a7-4769-aea5-9fe750c1f724","Type":"ContainerStarted","Data":"49ea9acf3d77cd00bcf44e38b92df7e751dba25ec69a636302dce0b2f1a5f3e1"}
Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.137356 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"
Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.152821 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" podStartSLOduration=3.152791052 podStartE2EDuration="3.152791052s" podCreationTimestamp="2026-01-21 06:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:41:22.149997514 +0000 UTC m=+371.946357217" watchObservedRunningTime="2026-01-21 06:41:22.152791052 +0000 UTC m=+371.949150725"
Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.553806 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" path="/var/lib/kubelet/pods/799094d2-a718-4044-b16d-8a011cc3ecaa/volumes"
Jan 21 06:41:28 crc kubenswrapper[4913]: I0121 06:41:28.894556 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-npfqd"]
Jan 21 06:41:28 crc kubenswrapper[4913]: I0121 06:41:28.895941 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:28 crc kubenswrapper[4913]: I0121 06:41:28.911801 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-npfqd"]
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.037910 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eb761e-bfd8-435e-b2d4-e269953c3140-ca-trust-extracted\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.037959 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-trusted-ca\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.037977 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-certificates\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038001 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-bound-sa-token\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038071 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eb761e-bfd8-435e-b2d4-e269953c3140-installation-pull-secrets\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038131 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8htm\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-kube-api-access-l8htm\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038170 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038230 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-tls\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.071054 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.139729 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-tls\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140118 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eb761e-bfd8-435e-b2d4-e269953c3140-ca-trust-extracted\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140149 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-trusted-ca\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140176 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-certificates\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140226 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-bound-sa-token\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140261 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eb761e-bfd8-435e-b2d4-e269953c3140-installation-pull-secrets\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140286 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8htm\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-kube-api-access-l8htm\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.142980 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eb761e-bfd8-435e-b2d4-e269953c3140-ca-trust-extracted\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.143113 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-trusted-ca\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.146424 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-certificates\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.154555 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eb761e-bfd8-435e-b2d4-e269953c3140-installation-pull-secrets\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.156848 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-tls\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.158935 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-bound-sa-token\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.161571 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8htm\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-kube-api-access-l8htm\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.552901 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.978346 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-npfqd"]
Jan 21 06:41:30 crc kubenswrapper[4913]: I0121 06:41:30.182444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" event={"ID":"73eb761e-bfd8-435e-b2d4-e269953c3140","Type":"ContainerStarted","Data":"660733956d52888c6da1b62bdf40d451311e4fe122454da02e4123731cd0a198"}
Jan 21 06:41:31 crc kubenswrapper[4913]: I0121 06:41:31.189954 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" event={"ID":"73eb761e-bfd8-435e-b2d4-e269953c3140","Type":"ContainerStarted","Data":"e094d0e5cf925a7295919c4da401cf1aaf5c48fa7374e1c76275b063a6c4c536"}
Jan 21 06:41:31 crc kubenswrapper[4913]: I0121 06:41:31.190691 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:31 crc kubenswrapper[4913]: I0121 06:41:31.212875 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" podStartSLOduration=3.212854165 podStartE2EDuration="3.212854165s" podCreationTimestamp="2026-01-21 06:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:41:31.211156298 +0000 UTC m=+381.007515981" watchObservedRunningTime="2026-01-21 06:41:31.212854165 +0000 UTC m=+381.009213858"
Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.319343 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.320119 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.320193 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg"
Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.321048 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.321135 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3" gracePeriod=600
Jan 21 06:41:39 crc kubenswrapper[4913]: I0121 06:41:39.250254 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3" exitCode=0
Jan 21 06:41:39 crc kubenswrapper[4913]: I0121 06:41:39.250365 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3"}
Jan 21 06:41:39 crc kubenswrapper[4913]: I0121 06:41:39.251034 4913 scope.go:117] "RemoveContainer" containerID="d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355"
Jan 21 06:41:40 crc kubenswrapper[4913]: I0121 06:41:40.258751 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832"}
Jan 21 06:41:49 crc kubenswrapper[4913]: I0121 06:41:49.558511 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd"
Jan 21 06:41:49 crc kubenswrapper[4913]: I0121 06:41:49.613538 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"]
Jan 21 06:42:14 crc kubenswrapper[4913]: I0121 06:42:14.654883 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerName="registry" containerID="cri-o://7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" gracePeriod=30
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.079848 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc"
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171434 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") "
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171505 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") "
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171534 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") "
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171626 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") "
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171656 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") "
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171710 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") "
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171889 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") "
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171927 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") "
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.172688 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.173416 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.173809 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.173878 4913 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.178260 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.178967 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.179635 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w" (OuterVolumeSpecName: "kube-api-access-kwm6w") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "kube-api-access-kwm6w".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.179658 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.185682 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.197093 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.275288 4913 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.275951 4913 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.276003 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.276033 4913 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.276049 4913 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496043 4913 generic.go:334] "Generic (PLEG): container finished" podID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerID="7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" exitCode=0 Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496126 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" 
event={"ID":"f46fd64f-46cb-4464-8f26-6df55bf77ba1","Type":"ContainerDied","Data":"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4"} Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496497 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" event={"ID":"f46fd64f-46cb-4464-8f26-6df55bf77ba1","Type":"ContainerDied","Data":"608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5"} Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496533 4913 scope.go:117] "RemoveContainer" containerID="7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496148 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.520316 4913 scope.go:117] "RemoveContainer" containerID="7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" Jan 21 06:42:15 crc kubenswrapper[4913]: E0121 06:42:15.520883 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4\": container with ID starting with 7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4 not found: ID does not exist" containerID="7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.520921 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4"} err="failed to get container status \"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4\": rpc error: code = NotFound desc = could not find container \"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4\": container with ID 
starting with 7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4 not found: ID does not exist" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.545262 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.556208 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 21 06:42:16 crc kubenswrapper[4913]: I0121 06:42:16.534990 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" path="/var/lib/kubelet/pods/f46fd64f-46cb-4464-8f26-6df55bf77ba1/volumes" Jan 21 06:44:08 crc kubenswrapper[4913]: I0121 06:44:08.319267 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:44:08 crc kubenswrapper[4913]: I0121 06:44:08.320026 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:44:38 crc kubenswrapper[4913]: I0121 06:44:38.319420 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:44:38 crc kubenswrapper[4913]: I0121 06:44:38.320216 4913 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.196579 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"] Jan 21 06:45:00 crc kubenswrapper[4913]: E0121 06:45:00.197307 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerName="registry" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.197321 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerName="registry" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.197445 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerName="registry" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.197890 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.201342 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.201859 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.211716 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.212088 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.212227 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.226311 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"] Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.312877 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.313059 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.313105 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.314743 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.331134 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.343276 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.536260 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.807067 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"] Jan 21 06:45:01 crc kubenswrapper[4913]: I0121 06:45:01.638306 4913 generic.go:334] "Generic (PLEG): container finished" podID="59238b39-5be3-4531-9d3d-7d3b89d2c394" containerID="4f2dab9db916ff8d0519c5eaa81d06419025976d84bd653776e58f5e8a4c59bf" exitCode=0 Jan 21 06:45:01 crc kubenswrapper[4913]: I0121 06:45:01.638412 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" event={"ID":"59238b39-5be3-4531-9d3d-7d3b89d2c394","Type":"ContainerDied","Data":"4f2dab9db916ff8d0519c5eaa81d06419025976d84bd653776e58f5e8a4c59bf"} Jan 21 06:45:01 crc kubenswrapper[4913]: I0121 06:45:01.638692 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" 
event={"ID":"59238b39-5be3-4531-9d3d-7d3b89d2c394","Type":"ContainerStarted","Data":"6b5a51c420191e3d068d4e126a9193361f47d211f512c80ee9eeaffdaea068af"} Jan 21 06:45:02 crc kubenswrapper[4913]: I0121 06:45:02.946378 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.145334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") pod \"59238b39-5be3-4531-9d3d-7d3b89d2c394\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.145435 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") pod \"59238b39-5be3-4531-9d3d-7d3b89d2c394\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.145515 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") pod \"59238b39-5be3-4531-9d3d-7d3b89d2c394\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.146915 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume" (OuterVolumeSpecName: "config-volume") pod "59238b39-5be3-4531-9d3d-7d3b89d2c394" (UID: "59238b39-5be3-4531-9d3d-7d3b89d2c394"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.152317 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "59238b39-5be3-4531-9d3d-7d3b89d2c394" (UID: "59238b39-5be3-4531-9d3d-7d3b89d2c394"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.152882 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc" (OuterVolumeSpecName: "kube-api-access-9c9wc") pod "59238b39-5be3-4531-9d3d-7d3b89d2c394" (UID: "59238b39-5be3-4531-9d3d-7d3b89d2c394"). InnerVolumeSpecName "kube-api-access-9c9wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.247901 4913 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.247957 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.247976 4913 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.656691 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" 
event={"ID":"59238b39-5be3-4531-9d3d-7d3b89d2c394","Type":"ContainerDied","Data":"6b5a51c420191e3d068d4e126a9193361f47d211f512c80ee9eeaffdaea068af"} Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.656767 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5a51c420191e3d068d4e126a9193361f47d211f512c80ee9eeaffdaea068af" Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.656892 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.319532 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.319976 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.320037 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.320911 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:45:08 crc 
kubenswrapper[4913]: I0121 06:45:08.320995 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832" gracePeriod=600 Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.695513 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832" exitCode=0 Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.695575 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832"} Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.695905 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3"} Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.695934 4913 scope.go:117] "RemoveContainer" containerID="a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3" Jan 21 06:45:10 crc kubenswrapper[4913]: I0121 06:45:10.737083 4913 scope.go:117] "RemoveContainer" containerID="43add6da62b377541babb732cc3e9566b8a93ef426ada2deaad87cd9ee4e97bb" Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.947854 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7xtt"] Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.952467 4913 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-controller" containerID="cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" gracePeriod=30 Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.952842 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb" containerID="cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" gracePeriod=30 Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.952879 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb" containerID="cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" gracePeriod=30 Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.952909 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="northd" containerID="cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" gracePeriod=30 Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.953163 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" gracePeriod=30 Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.953193 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-node" 
containerID="cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" gracePeriod=30 Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.953225 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-acl-logging" containerID="cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" gracePeriod=30 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.004266 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" containerID="cri-o://e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" gracePeriod=30 Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.077300 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce is running failed: container process not found" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.077820 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce is running failed: container process not found" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.078439 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce is running failed: container process not found" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.078477 4913 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.079853 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.081467 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.082369 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.082403 4913 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.225452 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.227983 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovn-acl-logging/0.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.228460 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovn-controller/0.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.229021 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278137 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v5kmt"] Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278318 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278329 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278339 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kubecfg-setup" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278344 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kubecfg-setup" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278353 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-acl-logging" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278360 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-acl-logging" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278369 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278376 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278389 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278396 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278402 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59238b39-5be3-4531-9d3d-7d3b89d2c394" containerName="collect-profiles" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278409 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="59238b39-5be3-4531-9d3d-7d3b89d2c394" containerName="collect-profiles" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278417 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="northd" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278423 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="northd" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278432 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278438 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278444 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278450 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278458 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278463 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278470 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278475 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278481 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-node" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278488 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-node" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278565 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278574 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278583 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-node" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278640 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278648 4913 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="northd" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278655 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278662 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278670 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278676 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="59238b39-5be3-4531-9d3d-7d3b89d2c394" containerName="collect-profiles" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278685 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278695 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278704 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-acl-logging" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278789 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278796 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278884 4913 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278962 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278969 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.281204 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.400701 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.400827 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.400930 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401041 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401083 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401117 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401230 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401238 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401301 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401343 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: 
\"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401362 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401394 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401413 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401433 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401430 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401454 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401475 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401473 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401533 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401628 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log" (OuterVolumeSpecName: "node-log") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401649 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401964 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401656 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401908 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402053 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402073 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402114 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket" (OuterVolumeSpecName: "log-socket") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402140 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402200 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402212 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402174 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402303 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402331 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402372 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402468 4913 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402561 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash" (OuterVolumeSpecName: "host-slash") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402575 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-netd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402583 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402658 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-script-lib\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402687 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c639508-4011-41af-8cb2-17be3ad6062c-ovn-node-metrics-cert\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402722 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-etc-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402742 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-env-overrides\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402768 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-kubelet\") pod \"ovnkube-node-v5kmt\" (UID: 
\"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402833 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-config\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402873 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-netns\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402909 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402941 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbj4\" (UniqueName: \"kubernetes.io/projected/0c639508-4011-41af-8cb2-17be3ad6062c-kube-api-access-zgbj4\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402962 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-systemd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403008 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-node-log\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403036 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-systemd-units\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403068 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-ovn\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403099 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403167 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403283 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-slash\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403312 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-bin\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403344 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-log-socket\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403368 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-var-lib-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403421 4913 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403438 4913 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403452 4913 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403466 4913 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403477 4913 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403488 4913 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403500 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403511 4913 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403522 4913 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403534 4913 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403544 4913 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403558 4913 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403570 4913 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403582 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403612 4913 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403625 4913 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403637 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.407095 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229" (OuterVolumeSpecName: "kube-api-access-j8229") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "kube-api-access-j8229". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.407167 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.414262 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505492 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505549 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-systemd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505574 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgbj4\" (UniqueName: \"kubernetes.io/projected/0c639508-4011-41af-8cb2-17be3ad6062c-kube-api-access-zgbj4\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505644 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-node-log\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505670 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-systemd-units\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505692 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-ovn\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505715 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505740 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505767 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-slash\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-bin\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505811 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-var-lib-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505829 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-log-socket\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505854 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-netd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505872 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-script-lib\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505891 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c639508-4011-41af-8cb2-17be3ad6062c-ovn-node-metrics-cert\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505916 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-kubelet\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505935 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-etc-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505952 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-env-overrides\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505988 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-config\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506010 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-netns\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506063 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506074 4913 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506086 4913 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506132 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-netns\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506175 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506201 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-systemd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506554 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-node-log\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506612 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-systemd-units\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506644 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-ovn\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506672 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506705 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506732 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-slash\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506757 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-bin\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506783 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-var-lib-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506812 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-log-socket\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506838 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-netd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.507576 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-script-lib\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.510813 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c639508-4011-41af-8cb2-17be3ad6062c-ovn-node-metrics-cert\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.510897 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-kubelet\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.510939 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-etc-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.511406 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-env-overrides\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.512171 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-config\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.523979 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgbj4\" (UniqueName: \"kubernetes.io/projected/0c639508-4011-41af-8cb2-17be3ad6062c-kube-api-access-zgbj4\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.600378 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.813908 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.816338 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovn-acl-logging/0.log"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.816900 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovn-controller/0.log"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817278 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" exitCode=0
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817308 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" exitCode=0
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817318 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" exitCode=0
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817328 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" exitCode=0
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817337 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" exitCode=0
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817345 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" exitCode=0
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817352 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" exitCode=143
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817361 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" exitCode=143
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817394 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817431 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817446 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817459 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817473 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817491 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817503 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817516 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817523 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817530 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817539 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817547 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817554 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817560 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817567 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817505 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817576 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817536 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817797 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818356 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818370 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818384 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818393 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818403 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818413 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818422 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818432 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818441 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818820 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818862 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818875 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818886 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818897 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818907 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818918 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818928 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818938 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818947 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818956 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819243 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819273 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819286 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819298 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819309 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819319 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819329 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"}
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819339 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"}
Jan 21
06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819348 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819359 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819370 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819489 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerDied","Data":"6d3afdae4860943b42d4b2b72247b6b2eecab2dd6155e56e7e96c02f450df3f7"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819355 4913 generic.go:334] "Generic (PLEG): container finished" podID="0c639508-4011-41af-8cb2-17be3ad6062c" containerID="6d3afdae4860943b42d4b2b72247b6b2eecab2dd6155e56e7e96c02f450df3f7" exitCode=0 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819630 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"0c4a2914541f79ea25f1ce921d3b7652030d36cfe969ec4d4fe7617ec7d1ed27"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.821674 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.822453 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.822517 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" containerID="35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70" exitCode=2 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.822553 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerDied","Data":"35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.822577 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.823494 4913 scope.go:117] "RemoveContainer" containerID="35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.824023 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gn6lz_openshift-multus(b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf)\"" pod="openshift-multus/multus-gn6lz" podUID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.846998 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.879246 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.907396 4913 scope.go:117] "RemoveContainer" 
containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.914414 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7xtt"] Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.921801 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7xtt"] Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.937888 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.957497 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.973307 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.023783 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.039773 4913 scope.go:117] "RemoveContainer" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.056129 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.073557 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.074040 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with 
e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.074082 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.074108 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.074439 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not exist" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.074468 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} err="failed to get container status \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not 
exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.074487 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.075562 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not exist" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.075613 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} err="failed to get container status \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": rpc error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.075633 4913 scope.go:117] "RemoveContainer" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.075865 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.075896 4913 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} err="failed to get container status \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.075913 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076092 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076118 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} err="failed to get container status \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076134 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076301 4913 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076326 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} err="failed to get container status \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076342 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076488 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076509 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} err="failed to get container status \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": rpc error: code = NotFound desc = could 
not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076523 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076675 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076698 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} err="failed to get container status \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076716 4913 scope.go:117] "RemoveContainer" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076904 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: 
ID does not exist" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076931 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} err="failed to get container status \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076946 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.077181 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077211 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} err="failed to get container status \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077227 4913 
scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077486 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077526 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077964 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} err="failed to get container status \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077990 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078187 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} err="failed to get container status \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": rpc 
error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078221 4913 scope.go:117] "RemoveContainer" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078460 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} err="failed to get container status \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078479 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078807 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} err="failed to get container status \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078835 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc 
kubenswrapper[4913]: I0121 06:45:30.079043 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} err="failed to get container status \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079067 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079264 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} err="failed to get container status \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": rpc error: code = NotFound desc = could not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079288 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079735 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} err="failed to get container status \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container 
with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079756 4913 scope.go:117] "RemoveContainer" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.080783 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} err="failed to get container status \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.080811 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081109 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} err="failed to get container status \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081129 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081377 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081430 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081727 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} err="failed to get container status \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081748 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081985 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} err="failed to get container status \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": rpc error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not 
exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082003 4913 scope.go:117] "RemoveContainer" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082194 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} err="failed to get container status \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082210 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082439 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} err="failed to get container status \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082457 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082727 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} err="failed to get container status 
\"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082758 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083010 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} err="failed to get container status \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": rpc error: code = NotFound desc = could not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083032 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083235 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} err="failed to get container status \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083252 4913 scope.go:117] "RemoveContainer" 
containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083602 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} err="failed to get container status \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083621 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083876 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} err="failed to get container status \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083895 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.084176 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could 
not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.084195 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.084897 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} err="failed to get container status \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.084925 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.085176 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} err="failed to get container status \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": rpc error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.085198 4913 scope.go:117] "RemoveContainer" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 
06:45:30.085654 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} err="failed to get container status \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.085673 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086021 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} err="failed to get container status \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086039 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086381 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} err="failed to get container status \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 
54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086401 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086723 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} err="failed to get container status \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": rpc error: code = NotFound desc = could not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086741 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086963 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} err="failed to get container status \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086979 4913 scope.go:117] "RemoveContainer" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087221 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} err="failed to get container status \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087238 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087523 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} err="failed to get container status \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087540 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087862 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not 
exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.545404 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" path="/var/lib/kubelet/pods/afe1e161-7227-48ff-824e-01d26e5c7218/volumes" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832648 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"9040624e31a6f981c383ebce3506cd50ea61698985e6cca91056b0a30aa01438"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832733 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"d8fa4c98d5ac3834bd30f130843d573cad0414f319bde25e3962962e22b9f5e9"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832761 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"84e4a4905f19f321b3ef43e2e792ad413e168c773a295bb5bc138227d9970d6b"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832785 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"7d7956d8f9537bbf32a6f2a8891c8dc27703946d987485c0cce511e12adb6ede"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832809 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"1f919245e12fcc4250d0bc7562028125d0807571ff382b096b28abff3c2c6597"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832832 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"c0a637daf2c1b4c12155b2e87b075cd948deaad0e4fdc7645135fcf0ebf742e6"} Jan 21 06:45:33 crc kubenswrapper[4913]: I0121 06:45:33.855799 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"3ab535b5da63a8a36227de3f62ca5ebf05226eb66b99fdb16430fc6d602cc89a"} Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.868332 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"6bef4975678b8df7e304e0c0fc594b605abc6867733664f5e484d3f7bee035ba"} Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.869080 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.869145 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.869250 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.895781 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.895880 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" podStartSLOduration=6.895861772 podStartE2EDuration="6.895861772s" podCreationTimestamp="2026-01-21 06:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 06:45:35.89434434 +0000 UTC m=+625.690704023" watchObservedRunningTime="2026-01-21 06:45:35.895861772 +0000 UTC m=+625.692221445" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.896116 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:45 crc kubenswrapper[4913]: I0121 06:45:45.526916 4913 scope.go:117] "RemoveContainer" containerID="35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70" Jan 21 06:45:45 crc kubenswrapper[4913]: E0121 06:45:45.528102 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gn6lz_openshift-multus(b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf)\"" pod="openshift-multus/multus-gn6lz" podUID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.324477 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg"] Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.327180 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.330303 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.338299 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg"] Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.483214 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.483296 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.483391 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: 
I0121 06:45:58.526840 4913 scope.go:117] "RemoveContainer" containerID="35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.584299 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.584650 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.584709 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.585202 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc 
kubenswrapper[4913]: I0121 06:45:58.585983 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.614394 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.642867 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: E0121 06:45:58.685145 4913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(c49ac2a11096d5b2b5b5d0b0477435db90b2dda00491e46a7942a4629387d060): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 06:45:58 crc kubenswrapper[4913]: E0121 06:45:58.685235 4913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(c49ac2a11096d5b2b5b5d0b0477435db90b2dda00491e46a7942a4629387d060): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: E0121 06:45:58.685258 4913 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(c49ac2a11096d5b2b5b5d0b0477435db90b2dda00491e46a7942a4629387d060): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: E0121 06:45:58.685315 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace(6bd2ad61-8bab-42d9-a09c-cf48255cc25c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace(6bd2ad61-8bab-42d9-a09c-cf48255cc25c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(c49ac2a11096d5b2b5b5d0b0477435db90b2dda00491e46a7942a4629387d060): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.039644 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.040164 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.040231 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"18d40612f6a3d6a699298b285688f6c7574c39b9f576da43152e3f7778531a36"} Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.040246 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.040755 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:59 crc kubenswrapper[4913]: E0121 06:45:59.062144 4913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(2c496d3d09bd8422601b12a1d852b82d2136ae139b7a48ff92cdf58c4d6eacc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 06:45:59 crc kubenswrapper[4913]: E0121 06:45:59.062218 4913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(2c496d3d09bd8422601b12a1d852b82d2136ae139b7a48ff92cdf58c4d6eacc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:59 crc kubenswrapper[4913]: E0121 06:45:59.062246 4913 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(2c496d3d09bd8422601b12a1d852b82d2136ae139b7a48ff92cdf58c4d6eacc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:59 crc kubenswrapper[4913]: E0121 06:45:59.062293 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace(6bd2ad61-8bab-42d9-a09c-cf48255cc25c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace(6bd2ad61-8bab-42d9-a09c-cf48255cc25c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(2c496d3d09bd8422601b12a1d852b82d2136ae139b7a48ff92cdf58c4d6eacc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.626778 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:46:10 crc kubenswrapper[4913]: I0121 06:46:10.788278 4913 scope.go:117] "RemoveContainer" containerID="f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6" Jan 21 06:46:12 crc kubenswrapper[4913]: I0121 06:46:12.128539 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 06:46:13 crc kubenswrapper[4913]: I0121 06:46:13.525804 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:46:13 crc kubenswrapper[4913]: I0121 06:46:13.526408 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:46:14 crc kubenswrapper[4913]: I0121 06:46:14.015340 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg"] Jan 21 06:46:14 crc kubenswrapper[4913]: I0121 06:46:14.144804 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerStarted","Data":"6bb2f0038e20a1e277f2b5add84a0b8c859fc34895666d4599568b9186f8fcb2"} Jan 21 06:46:15 crc kubenswrapper[4913]: I0121 06:46:15.153372 4913 generic.go:334] "Generic (PLEG): container finished" podID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerID="1fb95c7ac4eac0eb6167e3ebacdca54c3011f69b114dec155978458b577f1bdc" exitCode=0 Jan 21 06:46:15 crc kubenswrapper[4913]: I0121 06:46:15.153496 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerDied","Data":"1fb95c7ac4eac0eb6167e3ebacdca54c3011f69b114dec155978458b577f1bdc"} Jan 21 06:46:15 crc kubenswrapper[4913]: I0121 06:46:15.155227 4913 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 06:46:17 crc kubenswrapper[4913]: I0121 06:46:17.169370 4913 generic.go:334] "Generic (PLEG): container finished" podID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerID="030083572c4fca58b7011028c8a3a63e0e0fb2bd13336495217780d023d18a12" exitCode=0 Jan 21 06:46:17 crc kubenswrapper[4913]: I0121 06:46:17.169489 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" 
event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerDied","Data":"030083572c4fca58b7011028c8a3a63e0e0fb2bd13336495217780d023d18a12"}
Jan 21 06:46:18 crc kubenswrapper[4913]: I0121 06:46:18.180185 4913 generic.go:334] "Generic (PLEG): container finished" podID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerID="707eec87cf656b6427ef516b433aadd6cc2ae8aa4a9a1c826213c56a11f82258" exitCode=0
Jan 21 06:46:18 crc kubenswrapper[4913]: I0121 06:46:18.180252 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerDied","Data":"707eec87cf656b6427ef516b433aadd6cc2ae8aa4a9a1c826213c56a11f82258"}
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.467942 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg"
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.485238 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") pod \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") "
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.485284 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") pod \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") "
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.485359 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") pod \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") "
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.486373 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle" (OuterVolumeSpecName: "bundle") pod "6bd2ad61-8bab-42d9-a09c-cf48255cc25c" (UID: "6bd2ad61-8bab-42d9-a09c-cf48255cc25c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.491199 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl" (OuterVolumeSpecName: "kube-api-access-w69rl") pod "6bd2ad61-8bab-42d9-a09c-cf48255cc25c" (UID: "6bd2ad61-8bab-42d9-a09c-cf48255cc25c"). InnerVolumeSpecName "kube-api-access-w69rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.499605 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util" (OuterVolumeSpecName: "util") pod "6bd2ad61-8bab-42d9-a09c-cf48255cc25c" (UID: "6bd2ad61-8bab-42d9-a09c-cf48255cc25c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.586353 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") on node \"crc\" DevicePath \"\""
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.586383 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") on node \"crc\" DevicePath \"\""
Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.586393 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 06:46:20 crc kubenswrapper[4913]: I0121 06:46:20.194807 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerDied","Data":"6bb2f0038e20a1e277f2b5add84a0b8c859fc34895666d4599568b9186f8fcb2"}
Jan 21 06:46:20 crc kubenswrapper[4913]: I0121 06:46:20.194903 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb2f0038e20a1e277f2b5add84a0b8c859fc34895666d4599568b9186f8fcb2"
Jan 21 06:46:20 crc kubenswrapper[4913]: I0121 06:46:20.194865 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.594407 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"]
Jan 21 06:46:31 crc kubenswrapper[4913]: E0121 06:46:31.595284 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="util"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595301 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="util"
Jan 21 06:46:31 crc kubenswrapper[4913]: E0121 06:46:31.595323 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="pull"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595331 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="pull"
Jan 21 06:46:31 crc kubenswrapper[4913]: E0121 06:46:31.595353 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="extract"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595360 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="extract"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595466 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="extract"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595965 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.597624 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.598290 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.598553 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.598651 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.599836 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ch447"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.616181 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"]
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.640783 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-apiservice-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.640850 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-webhook-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.640879 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5h6\" (UniqueName: \"kubernetes.io/projected/e89d9462-a010-4873-9a7a-ff85114b35f9-kube-api-access-bp5h6\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.741374 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-apiservice-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.741455 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-webhook-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.741474 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5h6\" (UniqueName: \"kubernetes.io/projected/e89d9462-a010-4873-9a7a-ff85114b35f9-kube-api-access-bp5h6\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.748171 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-webhook-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.751434 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-apiservice-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.767375 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5h6\" (UniqueName: \"kubernetes.io/projected/e89d9462-a010-4873-9a7a-ff85114b35f9-kube-api-access-bp5h6\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.912229 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.916195 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"]
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.916930 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.918663 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.918882 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.919004 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-j4q7v"
Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.926172 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"]
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.045217 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-webhook-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.045305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-apiservice-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.045401 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswkw\" (UniqueName: \"kubernetes.io/projected/09278577-df56-4906-b822-79df291100ae-kube-api-access-lswkw\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.146645 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-webhook-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.146982 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-apiservice-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.147007 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lswkw\" (UniqueName: \"kubernetes.io/projected/09278577-df56-4906-b822-79df291100ae-kube-api-access-lswkw\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.151253 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-apiservice-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.163637 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-webhook-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.165262 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lswkw\" (UniqueName: \"kubernetes.io/projected/09278577-df56-4906-b822-79df291100ae-kube-api-access-lswkw\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.209250 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"]
Jan 21 06:46:32 crc kubenswrapper[4913]: W0121 06:46:32.214925 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode89d9462_a010_4873_9a7a_ff85114b35f9.slice/crio-b05e6b6cbfa3858c736d20f9557bc544283ee3dc7789d5d17544710cacdc0b8b WatchSource:0}: Error finding container b05e6b6cbfa3858c736d20f9557bc544283ee3dc7789d5d17544710cacdc0b8b: Status 404 returned error can't find the container with id b05e6b6cbfa3858c736d20f9557bc544283ee3dc7789d5d17544710cacdc0b8b
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.258392 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" event={"ID":"e89d9462-a010-4873-9a7a-ff85114b35f9","Type":"ContainerStarted","Data":"b05e6b6cbfa3858c736d20f9557bc544283ee3dc7789d5d17544710cacdc0b8b"}
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.279250 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.468213 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"]
Jan 21 06:46:33 crc kubenswrapper[4913]: I0121 06:46:33.267150 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" event={"ID":"09278577-df56-4906-b822-79df291100ae","Type":"ContainerStarted","Data":"a6c36be76f6a795c1d11f0fce3f6feb92b020c6decec288a4f97d7a9d6afdf24"}
Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.292950 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" event={"ID":"e89d9462-a010-4873-9a7a-ff85114b35f9","Type":"ContainerStarted","Data":"c10386d500b2efc304025d02efa7b67410b4849c9abee05e8bb31a6873a1d472"}
Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.295396 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" event={"ID":"09278577-df56-4906-b822-79df291100ae","Type":"ContainerStarted","Data":"b658dd5e97e95a3396b54af713b38f3b115c1960238c95cd930c7b83616894f8"}
Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.295635 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.315264 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" podStartSLOduration=2.202074755 podStartE2EDuration="7.315248874s" podCreationTimestamp="2026-01-21 06:46:31 +0000 UTC" firstStartedPulling="2026-01-21 06:46:32.216949662 +0000 UTC m=+682.013309335" lastFinishedPulling="2026-01-21 06:46:37.330123771 +0000 UTC m=+687.126483454" observedRunningTime="2026-01-21 06:46:38.311974614 +0000 UTC m=+688.108334287" watchObservedRunningTime="2026-01-21 06:46:38.315248874 +0000 UTC m=+688.111608547"
Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.338453 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" podStartSLOduration=2.479985855 podStartE2EDuration="7.338433711s" podCreationTimestamp="2026-01-21 06:46:31 +0000 UTC" firstStartedPulling="2026-01-21 06:46:32.477557057 +0000 UTC m=+682.273916730" lastFinishedPulling="2026-01-21 06:46:37.336004913 +0000 UTC m=+687.132364586" observedRunningTime="2026-01-21 06:46:38.335067489 +0000 UTC m=+688.131427182" watchObservedRunningTime="2026-01-21 06:46:38.338433711 +0000 UTC m=+688.134793404"
Jan 21 06:46:39 crc kubenswrapper[4913]: I0121 06:46:39.302864 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:46:52 crc kubenswrapper[4913]: I0121 06:46:52.287760 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"
Jan 21 06:47:08 crc kubenswrapper[4913]: I0121 06:47:08.319350 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:47:08 crc kubenswrapper[4913]: I0121 06:47:08.319995 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:47:11 crc kubenswrapper[4913]: I0121 06:47:11.915404 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.834384 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zwvdk"]
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.836887 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.839305 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.839528 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2ml6m"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.839825 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.839887 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"]
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.840670 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.842114 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.857433 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"]
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.922456 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qpr6d"]
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.923256 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.927697 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.928035 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.928161 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nmnwm"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.929404 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-pc8gk"]
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.929408 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.946430 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-pc8gk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:12.951369 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.006030 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-pc8gk"]
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041150 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fda16a07-5908-4736-9835-a29ce1f85a7e-metallb-excludel2\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041214 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9tr\" (UniqueName: \"kubernetes.io/projected/fda16a07-5908-4736-9835-a29ce1f85a7e-kube-api-access-ww9tr\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041249 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f6668fc-0d01-4942-abbe-758690c86480-metrics-certs\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041274 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041299 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f6668fc-0d01-4942-abbe-758690c86480-frr-startup\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041319 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-conf\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041372 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-sockets\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041392 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjxb\" (UniqueName: \"kubernetes.io/projected/1f6668fc-0d01-4942-abbe-758690c86480-kube-api-access-shjxb\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041422 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a33768cf-18ec-4cec-94fb-303b0779eb59-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041455 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-metrics\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041478 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-reloader\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041499 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ngvn\" (UniqueName: \"kubernetes.io/projected/a33768cf-18ec-4cec-94fb-303b0779eb59-kube-api-access-5ngvn\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041544 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-metrics-certs\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142160 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-cert\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142216 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fda16a07-5908-4736-9835-a29ce1f85a7e-metallb-excludel2\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142245 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww9tr\" (UniqueName: \"kubernetes.io/projected/fda16a07-5908-4736-9835-a29ce1f85a7e-kube-api-access-ww9tr\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142271 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f6668fc-0d01-4942-abbe-758690c86480-metrics-certs\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142318 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142338 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f6668fc-0d01-4942-abbe-758690c86480-frr-startup\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142351 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-conf\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142374 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-sockets\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142387 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjxb\" (UniqueName: \"kubernetes.io/projected/1f6668fc-0d01-4942-abbe-758690c86480-kube-api-access-shjxb\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142404 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a33768cf-18ec-4cec-94fb-303b0779eb59-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142425 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-metrics-certs\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142441 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gszl\" (UniqueName: \"kubernetes.io/projected/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-kube-api-access-8gszl\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142460 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-metrics\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142479 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-reloader\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142493 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ngvn\" (UniqueName: \"kubernetes.io/projected/a33768cf-18ec-4cec-94fb-303b0779eb59-kube-api-access-5ngvn\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142518 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-metrics-certs\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143035 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-conf\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143028 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-sockets\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: E0121 06:47:13.143141 4913 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 21 06:47:13 crc kubenswrapper[4913]: E0121 06:47:13.143192 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist podName:fda16a07-5908-4736-9835-a29ce1f85a7e nodeName:}" failed. No retries permitted until 2026-01-21 06:47:13.643175424 +0000 UTC m=+723.439535097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist") pod "speaker-qpr6d" (UID: "fda16a07-5908-4736-9835-a29ce1f85a7e") : secret "metallb-memberlist" not found
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143141 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fda16a07-5908-4736-9835-a29ce1f85a7e-metallb-excludel2\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143408 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f6668fc-0d01-4942-abbe-758690c86480-frr-startup\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk"
Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143434 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-reloader\") pod \"frr-k8s-zwvdk\" (UID: 
\"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143479 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-metrics\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.147556 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-metrics-certs\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.147554 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f6668fc-0d01-4942-abbe-758690c86480-metrics-certs\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.148383 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a33768cf-18ec-4cec-94fb-303b0779eb59-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.164848 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ngvn\" (UniqueName: \"kubernetes.io/projected/a33768cf-18ec-4cec-94fb-303b0779eb59-kube-api-access-5ngvn\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.169625 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjxb\" (UniqueName: \"kubernetes.io/projected/1f6668fc-0d01-4942-abbe-758690c86480-kube-api-access-shjxb\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.169712 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww9tr\" (UniqueName: \"kubernetes.io/projected/fda16a07-5908-4736-9835-a29ce1f85a7e-kube-api-access-ww9tr\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.242998 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-metrics-certs\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.243041 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gszl\" (UniqueName: \"kubernetes.io/projected/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-kube-api-access-8gszl\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.243085 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-cert\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.244786 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"metallb-webhook-cert" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.247272 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-metrics-certs\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.257794 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-cert\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.261753 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gszl\" (UniqueName: \"kubernetes.io/projected/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-kube-api-access-8gszl\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.322904 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.455217 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.464799 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.609166 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-pc8gk"] Jan 21 06:47:13 crc kubenswrapper[4913]: W0121 06:47:13.623653 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf59b1ff9_32cd_4fa1_916b_02dd65f8f75c.slice/crio-a9713868134e91f84fe85b40971f5b2d6e534020bd978d6976e64d3369dc0cb9 WatchSource:0}: Error finding container a9713868134e91f84fe85b40971f5b2d6e534020bd978d6976e64d3369dc0cb9: Status 404 returned error can't find the container with id a9713868134e91f84fe85b40971f5b2d6e534020bd978d6976e64d3369dc0cb9 Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.650322 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: E0121 06:47:13.650519 4913 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 06:47:13 crc kubenswrapper[4913]: E0121 06:47:13.650662 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist podName:fda16a07-5908-4736-9835-a29ce1f85a7e nodeName:}" failed. No retries permitted until 2026-01-21 06:47:14.650637836 +0000 UTC m=+724.446997529 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist") pod "speaker-qpr6d" (UID: "fda16a07-5908-4736-9835-a29ce1f85a7e") : secret "metallb-memberlist" not found Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.701804 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"c9cd52245594a83181d0378a9d6bc65f4d633eb1a9bb3e77229a0b995736f558"} Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.702856 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pc8gk" event={"ID":"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c","Type":"ContainerStarted","Data":"a9713868134e91f84fe85b40971f5b2d6e534020bd978d6976e64d3369dc0cb9"} Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.910233 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"] Jan 21 06:47:13 crc kubenswrapper[4913]: W0121 06:47:13.911509 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda33768cf_18ec_4cec_94fb_303b0779eb59.slice/crio-e63dbdb825ff16f5e3344759be52184d04b1aa275ab47fa03457a802918f8745 WatchSource:0}: Error finding container e63dbdb825ff16f5e3344759be52184d04b1aa275ab47fa03457a802918f8745: Status 404 returned error can't find the container with id e63dbdb825ff16f5e3344759be52184d04b1aa275ab47fa03457a802918f8745 Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 06:47:14.668950 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 
06:47:14.686998 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 06:47:14.723337 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pc8gk" event={"ID":"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c","Type":"ContainerStarted","Data":"e18186b11fda2f691e21af3347a0ebbc4d394f52fd77f1a4624b5db704c7d3dc"} Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 06:47:14.727258 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" event={"ID":"a33768cf-18ec-4cec-94fb-303b0779eb59","Type":"ContainerStarted","Data":"e63dbdb825ff16f5e3344759be52184d04b1aa275ab47fa03457a802918f8745"} Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 06:47:14.743037 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qpr6d" Jan 21 06:47:14 crc kubenswrapper[4913]: W0121 06:47:14.776972 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda16a07_5908_4736_9835_a29ce1f85a7e.slice/crio-1af499bfb0896c1987d890aa44664cc4657ae85657c860b56b31e8266db55f6b WatchSource:0}: Error finding container 1af499bfb0896c1987d890aa44664cc4657ae85657c860b56b31e8266db55f6b: Status 404 returned error can't find the container with id 1af499bfb0896c1987d890aa44664cc4657ae85657c860b56b31e8266db55f6b Jan 21 06:47:15 crc kubenswrapper[4913]: I0121 06:47:15.737144 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpr6d" event={"ID":"fda16a07-5908-4736-9835-a29ce1f85a7e","Type":"ContainerStarted","Data":"cda30f4c90493441fcfcd6fba83f7eea2db3df233d7da622acec5e58bcdcab38"} Jan 21 06:47:15 crc kubenswrapper[4913]: I0121 06:47:15.737556 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpr6d" event={"ID":"fda16a07-5908-4736-9835-a29ce1f85a7e","Type":"ContainerStarted","Data":"1af499bfb0896c1987d890aa44664cc4657ae85657c860b56b31e8266db55f6b"} Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.757281 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pc8gk" event={"ID":"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c","Type":"ContainerStarted","Data":"884a4fba2e168b17b4247e4425fd28ab08bb1279dbae39a4eefbf202191a6dda"} Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.758884 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.765809 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpr6d" 
event={"ID":"fda16a07-5908-4736-9835-a29ce1f85a7e","Type":"ContainerStarted","Data":"f8a1d591bc77857179fddb77b2e338e086699fcdf8729efede4e4dfc1769631f"} Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.766247 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qpr6d" Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.775263 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-pc8gk" podStartSLOduration=2.9120213169999998 podStartE2EDuration="5.775244568s" podCreationTimestamp="2026-01-21 06:47:12 +0000 UTC" firstStartedPulling="2026-01-21 06:47:13.721823501 +0000 UTC m=+723.518183184" lastFinishedPulling="2026-01-21 06:47:16.585046762 +0000 UTC m=+726.381406435" observedRunningTime="2026-01-21 06:47:17.773154651 +0000 UTC m=+727.569514334" watchObservedRunningTime="2026-01-21 06:47:17.775244568 +0000 UTC m=+727.571604241" Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.793690 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qpr6d" podStartSLOduration=4.242662338 podStartE2EDuration="5.793672229s" podCreationTimestamp="2026-01-21 06:47:12 +0000 UTC" firstStartedPulling="2026-01-21 06:47:15.032779066 +0000 UTC m=+724.829138739" lastFinishedPulling="2026-01-21 06:47:16.583788957 +0000 UTC m=+726.380148630" observedRunningTime="2026-01-21 06:47:17.787131718 +0000 UTC m=+727.583491401" watchObservedRunningTime="2026-01-21 06:47:17.793672229 +0000 UTC m=+727.590031922" Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.791875 4913 generic.go:334] "Generic (PLEG): container finished" podID="1f6668fc-0d01-4942-abbe-758690c86480" containerID="ada96d127aff0cb00e74e0727d183d8bf979048c153222547856b7e4a93f96ba" exitCode=0 Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.791957 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" 
event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerDied","Data":"ada96d127aff0cb00e74e0727d183d8bf979048c153222547856b7e4a93f96ba"} Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.795449 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" event={"ID":"a33768cf-18ec-4cec-94fb-303b0779eb59","Type":"ContainerStarted","Data":"30bea442576368f3c76ee6931135042d67a96aef9643c32bac4d75ef23447de5"} Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.796132 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.850006 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" podStartSLOduration=3.066296596 podStartE2EDuration="9.849979148s" podCreationTimestamp="2026-01-21 06:47:12 +0000 UTC" firstStartedPulling="2026-01-21 06:47:13.913666261 +0000 UTC m=+723.710025934" lastFinishedPulling="2026-01-21 06:47:20.697348803 +0000 UTC m=+730.493708486" observedRunningTime="2026-01-21 06:47:21.842432338 +0000 UTC m=+731.638792011" watchObservedRunningTime="2026-01-21 06:47:21.849979148 +0000 UTC m=+731.646338861" Jan 21 06:47:22 crc kubenswrapper[4913]: I0121 06:47:22.805938 4913 generic.go:334] "Generic (PLEG): container finished" podID="1f6668fc-0d01-4942-abbe-758690c86480" containerID="806ff8ef8e3b96741bc84a91bcfe90f131d9476d6d3a257f3c7a0df319ae0e16" exitCode=0 Jan 21 06:47:22 crc kubenswrapper[4913]: I0121 06:47:22.806012 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerDied","Data":"806ff8ef8e3b96741bc84a91bcfe90f131d9476d6d3a257f3c7a0df319ae0e16"} Jan 21 06:47:23 crc kubenswrapper[4913]: I0121 06:47:23.328621 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:23 crc kubenswrapper[4913]: I0121 06:47:23.817685 4913 generic.go:334] "Generic (PLEG): container finished" podID="1f6668fc-0d01-4942-abbe-758690c86480" containerID="4152e49662ae3a85bc3bc86a8fc5ba7cc32e765f76b43dddbd288a395e5f7e88" exitCode=0 Jan 21 06:47:23 crc kubenswrapper[4913]: I0121 06:47:23.817868 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerDied","Data":"4152e49662ae3a85bc3bc86a8fc5ba7cc32e765f76b43dddbd288a395e5f7e88"} Jan 21 06:47:24 crc kubenswrapper[4913]: I0121 06:47:24.827897 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"0f79720eefbe1c707a734b0ae09aab1507dfc75991780b1c316368316e9cbd5f"} Jan 21 06:47:24 crc kubenswrapper[4913]: I0121 06:47:24.828295 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"2df5e7c00bde0880968e7e0d861462c796de5f8c66bd5174c44a6c83f5f42029"} Jan 21 06:47:24 crc kubenswrapper[4913]: I0121 06:47:24.828318 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"e0b60c06ce24c930a8be893df63221cb4cb39510343e5a702f4ab245646fe24f"} Jan 21 06:47:25 crc kubenswrapper[4913]: I0121 06:47:25.839383 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"d9d411daa074b4dd216952569b8b74d7e7fe4c7882bc53c8ddf92aa5e6ea46ea"} Jan 21 06:47:25 crc kubenswrapper[4913]: I0121 06:47:25.839935 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" 
event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"2c9178cd32d8ec6b82d069a475beb64e05d88663759500858bdbfb89ff73ef02"} Jan 21 06:47:25 crc kubenswrapper[4913]: I0121 06:47:25.839968 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"9cad03763e3e5997ee9c989b7ff955af2724d7c91bf4731a560aafe148b8f48e"} Jan 21 06:47:25 crc kubenswrapper[4913]: I0121 06:47:25.840001 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:28 crc kubenswrapper[4913]: I0121 06:47:28.455846 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:28 crc kubenswrapper[4913]: I0121 06:47:28.489628 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:28 crc kubenswrapper[4913]: I0121 06:47:28.515544 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zwvdk" podStartSLOduration=9.437657541 podStartE2EDuration="16.515521001s" podCreationTimestamp="2026-01-21 06:47:12 +0000 UTC" firstStartedPulling="2026-01-21 06:47:13.625665735 +0000 UTC m=+723.422025418" lastFinishedPulling="2026-01-21 06:47:20.703529205 +0000 UTC m=+730.499888878" observedRunningTime="2026-01-21 06:47:25.86715682 +0000 UTC m=+735.663516533" watchObservedRunningTime="2026-01-21 06:47:28.515521001 +0000 UTC m=+738.311880704" Jan 21 06:47:33 crc kubenswrapper[4913]: I0121 06:47:33.473965 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:34 crc kubenswrapper[4913]: I0121 06:47:34.747790 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qpr6d" Jan 21 06:47:38 crc kubenswrapper[4913]: I0121 
06:47:38.319646 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:47:38 crc kubenswrapper[4913]: I0121 06:47:38.320290 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.636298 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.637378 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.639928 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-9v972" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.639947 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.639957 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.700603 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.745319 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gn27\" (UniqueName: 
\"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") pod \"mariadb-operator-index-hlfnm\" (UID: \"bf14c82c-256f-4096-beb2-4e8be30564aa\") " pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.846767 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gn27\" (UniqueName: \"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") pod \"mariadb-operator-index-hlfnm\" (UID: \"bf14c82c-256f-4096-beb2-4e8be30564aa\") " pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.866879 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gn27\" (UniqueName: \"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") pod \"mariadb-operator-index-hlfnm\" (UID: \"bf14c82c-256f-4096-beb2-4e8be30564aa\") " pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.957091 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:42 crc kubenswrapper[4913]: I0121 06:47:42.360050 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:42 crc kubenswrapper[4913]: I0121 06:47:42.966304 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hlfnm" event={"ID":"bf14c82c-256f-4096-beb2-4e8be30564aa","Type":"ContainerStarted","Data":"b5b38486a6db7d95112680710d969fd5155e2c4c2915cdad1a7de23e7a82cc84"} Jan 21 06:47:43 crc kubenswrapper[4913]: I0121 06:47:43.461613 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:43 crc kubenswrapper[4913]: I0121 06:47:43.643928 4913 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 06:47:44 crc kubenswrapper[4913]: I0121 06:47:44.980236 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hlfnm" event={"ID":"bf14c82c-256f-4096-beb2-4e8be30564aa","Type":"ContainerStarted","Data":"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0"} Jan 21 06:47:45 crc kubenswrapper[4913]: I0121 06:47:45.005122 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-hlfnm" podStartSLOduration=2.473251419 podStartE2EDuration="4.005090759s" podCreationTimestamp="2026-01-21 06:47:41 +0000 UTC" firstStartedPulling="2026-01-21 06:47:42.362743783 +0000 UTC m=+752.159103456" lastFinishedPulling="2026-01-21 06:47:43.894583123 +0000 UTC m=+753.690942796" observedRunningTime="2026-01-21 06:47:44.995428661 +0000 UTC m=+754.791788374" watchObservedRunningTime="2026-01-21 06:47:45.005090759 +0000 UTC m=+754.801450472" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.007112 4913 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.623390 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.624975 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.643535 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.823192 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") pod \"mariadb-operator-index-jqn8q\" (UID: \"211a0853-fb6a-4002-98be-aa01c99eaa7d\") " pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.925442 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") pod \"mariadb-operator-index-jqn8q\" (UID: \"211a0853-fb6a-4002-98be-aa01c99eaa7d\") " pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.957499 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") pod \"mariadb-operator-index-jqn8q\" (UID: \"211a0853-fb6a-4002-98be-aa01c99eaa7d\") " pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.959387 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.995302 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-hlfnm" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerName="registry-server" containerID="cri-o://ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" gracePeriod=2 Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.180034 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:47:47 crc kubenswrapper[4913]: W0121 06:47:47.188703 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod211a0853_fb6a_4002_98be_aa01c99eaa7d.slice/crio-db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b WatchSource:0}: Error finding container db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b: Status 404 returned error can't find the container with id db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.315848 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.437256 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gn27\" (UniqueName: \"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") pod \"bf14c82c-256f-4096-beb2-4e8be30564aa\" (UID: \"bf14c82c-256f-4096-beb2-4e8be30564aa\") " Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.444779 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27" (OuterVolumeSpecName: "kube-api-access-8gn27") pod "bf14c82c-256f-4096-beb2-4e8be30564aa" (UID: "bf14c82c-256f-4096-beb2-4e8be30564aa"). InnerVolumeSpecName "kube-api-access-8gn27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.538759 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gn27\" (UniqueName: \"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") on node \"crc\" DevicePath \"\"" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007360 4913 generic.go:334] "Generic (PLEG): container finished" podID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerID="ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" exitCode=0 Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007430 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007459 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hlfnm" event={"ID":"bf14c82c-256f-4096-beb2-4e8be30564aa","Type":"ContainerDied","Data":"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0"} Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007492 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hlfnm" event={"ID":"bf14c82c-256f-4096-beb2-4e8be30564aa","Type":"ContainerDied","Data":"b5b38486a6db7d95112680710d969fd5155e2c4c2915cdad1a7de23e7a82cc84"} Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007520 4913 scope.go:117] "RemoveContainer" containerID="ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.012244 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jqn8q" event={"ID":"211a0853-fb6a-4002-98be-aa01c99eaa7d","Type":"ContainerStarted","Data":"08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118"} Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.012298 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jqn8q" event={"ID":"211a0853-fb6a-4002-98be-aa01c99eaa7d","Type":"ContainerStarted","Data":"db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b"} Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.034777 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-jqn8q" podStartSLOduration=1.554799515 podStartE2EDuration="2.034761856s" podCreationTimestamp="2026-01-21 06:47:46 +0000 UTC" firstStartedPulling="2026-01-21 06:47:47.207921456 +0000 UTC m=+757.004281129" lastFinishedPulling="2026-01-21 06:47:47.687883787 +0000 
UTC m=+757.484243470" observedRunningTime="2026-01-21 06:47:48.033415669 +0000 UTC m=+757.829775362" watchObservedRunningTime="2026-01-21 06:47:48.034761856 +0000 UTC m=+757.831121529" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.050029 4913 scope.go:117] "RemoveContainer" containerID="ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" Jan 21 06:47:48 crc kubenswrapper[4913]: E0121 06:47:48.050721 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0\": container with ID starting with ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0 not found: ID does not exist" containerID="ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.050825 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0"} err="failed to get container status \"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0\": rpc error: code = NotFound desc = could not find container \"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0\": container with ID starting with ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0 not found: ID does not exist" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.065048 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.069383 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.537493 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" 
path="/var/lib/kubelet/pods/bf14c82c-256f-4096-beb2-4e8be30564aa/volumes" Jan 21 06:47:56 crc kubenswrapper[4913]: I0121 06:47:56.960117 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:56 crc kubenswrapper[4913]: I0121 06:47:56.963330 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:56 crc kubenswrapper[4913]: I0121 06:47:56.995008 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.104107 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.464403 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"] Jan 21 06:47:57 crc kubenswrapper[4913]: E0121 06:47:57.464650 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerName="registry-server" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.464672 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerName="registry-server" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.464798 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerName="registry-server" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.465522 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.467281 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.477572 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"] Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.576470 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.576653 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.576699 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 
06:47:57.678447 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.678519 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.678694 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.679900 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.679967 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.712955 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.794384 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:58 crc kubenswrapper[4913]: I0121 06:47:58.229891 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"] Jan 21 06:47:59 crc kubenswrapper[4913]: I0121 06:47:59.091116 4913 generic.go:334] "Generic (PLEG): container finished" podID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerID="d9191512905a50023a8bd3340913a6390b0e97c743493bde552499fe3bccd78f" exitCode=0 Jan 21 06:47:59 crc kubenswrapper[4913]: I0121 06:47:59.091214 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerDied","Data":"d9191512905a50023a8bd3340913a6390b0e97c743493bde552499fe3bccd78f"} Jan 21 06:47:59 crc kubenswrapper[4913]: I0121 06:47:59.091487 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerStarted","Data":"14c551895737557e89dec36d73483641542cf3a808c34f5b8d47e21ee1bbb538"} Jan 21 06:48:00 crc kubenswrapper[4913]: I0121 06:48:00.103299 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerStarted","Data":"78cb1589a905d5dddd50883207b015f6195746217f5f18c55b7dfc51421182fe"} Jan 21 06:48:01 crc kubenswrapper[4913]: I0121 06:48:01.112760 4913 generic.go:334] "Generic (PLEG): container finished" podID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerID="78cb1589a905d5dddd50883207b015f6195746217f5f18c55b7dfc51421182fe" exitCode=0 Jan 21 06:48:01 crc kubenswrapper[4913]: I0121 06:48:01.112826 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerDied","Data":"78cb1589a905d5dddd50883207b015f6195746217f5f18c55b7dfc51421182fe"} Jan 21 06:48:02 crc kubenswrapper[4913]: I0121 06:48:02.126201 4913 generic.go:334] "Generic (PLEG): container finished" podID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerID="e2875581fbd572dea4f4e410e08bce794cd12bf464303d41fbc9d66b0d7fcef6" exitCode=0 Jan 21 06:48:02 crc kubenswrapper[4913]: I0121 06:48:02.126274 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerDied","Data":"e2875581fbd572dea4f4e410e08bce794cd12bf464303d41fbc9d66b0d7fcef6"} Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.391666 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.466632 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") pod \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.472366 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz" (OuterVolumeSpecName: "kube-api-access-ndzxz") pod "980a7b2a-b9d1-4935-ac4c-9ac4a4730138" (UID: "980a7b2a-b9d1-4935-ac4c-9ac4a4730138"). InnerVolumeSpecName "kube-api-access-ndzxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.567352 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") pod \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.567447 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") pod \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.567778 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.568744 4913 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle" (OuterVolumeSpecName: "bundle") pod "980a7b2a-b9d1-4935-ac4c-9ac4a4730138" (UID: "980a7b2a-b9d1-4935-ac4c-9ac4a4730138"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.597260 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util" (OuterVolumeSpecName: "util") pod "980a7b2a-b9d1-4935-ac4c-9ac4a4730138" (UID: "980a7b2a-b9d1-4935-ac4c-9ac4a4730138"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.668632 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.668980 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:04 crc kubenswrapper[4913]: I0121 06:48:04.141263 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerDied","Data":"14c551895737557e89dec36d73483641542cf3a808c34f5b8d47e21ee1bbb538"} Jan 21 06:48:04 crc kubenswrapper[4913]: I0121 06:48:04.141329 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c551895737557e89dec36d73483641542cf3a808c34f5b8d47e21ee1bbb538" Jan 21 06:48:04 crc kubenswrapper[4913]: I0121 06:48:04.141356 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.319735 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.320520 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.320628 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.321688 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.321826 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3" gracePeriod=600 Jan 21 06:48:09 crc kubenswrapper[4913]: I0121 06:48:09.179203 4913 generic.go:334] "Generic (PLEG): 
container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3" exitCode=0 Jan 21 06:48:09 crc kubenswrapper[4913]: I0121 06:48:09.179235 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3"} Jan 21 06:48:09 crc kubenswrapper[4913]: I0121 06:48:09.179936 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908"} Jan 21 06:48:09 crc kubenswrapper[4913]: I0121 06:48:09.179972 4913 scope.go:117] "RemoveContainer" containerID="0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.273959 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:48:11 crc kubenswrapper[4913]: E0121 06:48:11.274571 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="pull" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.274610 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="pull" Jan 21 06:48:11 crc kubenswrapper[4913]: E0121 06:48:11.274637 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="extract" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.274644 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="extract" Jan 21 06:48:11 crc kubenswrapper[4913]: 
E0121 06:48:11.274662 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="util" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.274669 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="util" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.274780 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="extract" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.275177 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.276542 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-klgvv" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.276792 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.281236 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.306109 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.473668 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " 
pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.473735 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.473810 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.574935 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.575194 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.575261 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.581320 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.581480 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.594135 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.894406 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:12 crc kubenswrapper[4913]: I0121 06:48:12.373361 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:48:12 crc kubenswrapper[4913]: W0121 06:48:12.378378 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod463ce3c4_98b5_41f1_bf36_f271228094e5.slice/crio-cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4 WatchSource:0}: Error finding container cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4: Status 404 returned error can't find the container with id cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4 Jan 21 06:48:13 crc kubenswrapper[4913]: I0121 06:48:13.213024 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" event={"ID":"463ce3c4-98b5-41f1-bf36-f271228094e5","Type":"ContainerStarted","Data":"cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4"} Jan 21 06:48:16 crc kubenswrapper[4913]: I0121 06:48:16.228112 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" event={"ID":"463ce3c4-98b5-41f1-bf36-f271228094e5","Type":"ContainerStarted","Data":"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"} Jan 21 06:48:16 crc kubenswrapper[4913]: I0121 06:48:16.228777 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:16 crc kubenswrapper[4913]: I0121 06:48:16.253017 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" 
podStartSLOduration=1.8517759059999999 podStartE2EDuration="5.253000129s" podCreationTimestamp="2026-01-21 06:48:11 +0000 UTC" firstStartedPulling="2026-01-21 06:48:12.381551477 +0000 UTC m=+782.177911150" lastFinishedPulling="2026-01-21 06:48:15.7827757 +0000 UTC m=+785.579135373" observedRunningTime="2026-01-21 06:48:16.249807562 +0000 UTC m=+786.046167245" watchObservedRunningTime="2026-01-21 06:48:16.253000129 +0000 UTC m=+786.049359802" Jan 21 06:48:21 crc kubenswrapper[4913]: I0121 06:48:21.898466 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.455734 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.456997 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.459697 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-mqdgg" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.465826 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.570853 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvxq\" (UniqueName: \"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") pod \"infra-operator-index-9rr22\" (UID: \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\") " pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.672242 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvxq\" (UniqueName: 
\"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") pod \"infra-operator-index-9rr22\" (UID: \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\") " pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.704425 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvxq\" (UniqueName: \"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") pod \"infra-operator-index-9rr22\" (UID: \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\") " pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.772624 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.997425 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:48:27 crc kubenswrapper[4913]: W0121 06:48:27.016294 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc40a34d4_0ef1_4aff_bc37_87c27e191d1f.slice/crio-cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5 WatchSource:0}: Error finding container cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5: Status 404 returned error can't find the container with id cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5 Jan 21 06:48:27 crc kubenswrapper[4913]: I0121 06:48:27.300575 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9rr22" event={"ID":"c40a34d4-0ef1-4aff-bc37-87c27e191d1f","Type":"ContainerStarted","Data":"cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5"} Jan 21 06:48:28 crc kubenswrapper[4913]: I0121 06:48:28.310742 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-index-9rr22" event={"ID":"c40a34d4-0ef1-4aff-bc37-87c27e191d1f","Type":"ContainerStarted","Data":"2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f"} Jan 21 06:48:28 crc kubenswrapper[4913]: I0121 06:48:28.327692 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-9rr22" podStartSLOduration=1.4566955400000001 podStartE2EDuration="2.327661935s" podCreationTimestamp="2026-01-21 06:48:26 +0000 UTC" firstStartedPulling="2026-01-21 06:48:27.01820668 +0000 UTC m=+796.814566343" lastFinishedPulling="2026-01-21 06:48:27.889173065 +0000 UTC m=+797.685532738" observedRunningTime="2026-01-21 06:48:28.327555362 +0000 UTC m=+798.123915045" watchObservedRunningTime="2026-01-21 06:48:28.327661935 +0000 UTC m=+798.124021688" Jan 21 06:48:36 crc kubenswrapper[4913]: I0121 06:48:36.774378 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:36 crc kubenswrapper[4913]: I0121 06:48:36.775015 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:36 crc kubenswrapper[4913]: I0121 06:48:36.810649 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:37 crc kubenswrapper[4913]: I0121 06:48:37.402570 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.702653 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"] Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.703832 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.716785 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"] Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.716793 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.840152 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.840252 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.840313 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 
06:48:38.941742 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.941851 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.941913 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.942258 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.942778 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.970673 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:39 crc kubenswrapper[4913]: I0121 06:48:39.029188 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:39 crc kubenswrapper[4913]: I0121 06:48:39.271498 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"] Jan 21 06:48:39 crc kubenswrapper[4913]: W0121 06:48:39.280195 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a93fdf_fffb_4344_8ac8_81d8be41eea7.slice/crio-7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5 WatchSource:0}: Error finding container 7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5: Status 404 returned error can't find the container with id 7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5 Jan 21 06:48:39 crc kubenswrapper[4913]: I0121 06:48:39.383011 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" 
event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerStarted","Data":"7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5"} Jan 21 06:48:40 crc kubenswrapper[4913]: I0121 06:48:40.388352 4913 generic.go:334] "Generic (PLEG): container finished" podID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerID="660368d7d30a6dcd15b89683468c16579bae9e6ba5e62cde1ef85f9aba8de9d8" exitCode=0 Jan 21 06:48:40 crc kubenswrapper[4913]: I0121 06:48:40.388393 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerDied","Data":"660368d7d30a6dcd15b89683468c16579bae9e6ba5e62cde1ef85f9aba8de9d8"} Jan 21 06:48:41 crc kubenswrapper[4913]: E0121 06:48:41.630215 4913 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a93fdf_fffb_4344_8ac8_81d8be41eea7.slice/crio-conmon-e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a93fdf_fffb_4344_8ac8_81d8be41eea7.slice/crio-e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11.scope\": RecentStats: unable to find data in memory cache]" Jan 21 06:48:42 crc kubenswrapper[4913]: I0121 06:48:42.404006 4913 generic.go:334] "Generic (PLEG): container finished" podID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerID="e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11" exitCode=0 Jan 21 06:48:42 crc kubenswrapper[4913]: I0121 06:48:42.404354 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" 
event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerDied","Data":"e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11"} Jan 21 06:48:43 crc kubenswrapper[4913]: I0121 06:48:43.413851 4913 generic.go:334] "Generic (PLEG): container finished" podID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerID="7ea6d30dbbc206edb2f162346b01f1b70cea4ff52c09855b5688ceae555cd86f" exitCode=0 Jan 21 06:48:43 crc kubenswrapper[4913]: I0121 06:48:43.413917 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerDied","Data":"7ea6d30dbbc206edb2f162346b01f1b70cea4ff52c09855b5688ceae555cd86f"} Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.707284 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.819880 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") pod \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.820023 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") pod \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.820095 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") pod \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\" (UID: 
\"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.823053 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle" (OuterVolumeSpecName: "bundle") pod "f9a93fdf-fffb-4344-8ac8-81d8be41eea7" (UID: "f9a93fdf-fffb-4344-8ac8-81d8be41eea7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.829925 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util" (OuterVolumeSpecName: "util") pod "f9a93fdf-fffb-4344-8ac8-81d8be41eea7" (UID: "f9a93fdf-fffb-4344-8ac8-81d8be41eea7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.831052 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f" (OuterVolumeSpecName: "kube-api-access-4wg8f") pod "f9a93fdf-fffb-4344-8ac8-81d8be41eea7" (UID: "f9a93fdf-fffb-4344-8ac8-81d8be41eea7"). InnerVolumeSpecName "kube-api-access-4wg8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.921308 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.921362 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.921383 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:45 crc kubenswrapper[4913]: I0121 06:48:45.434330 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerDied","Data":"7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5"} Jan 21 06:48:45 crc kubenswrapper[4913]: I0121 06:48:45.434729 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5" Jan 21 06:48:45 crc kubenswrapper[4913]: I0121 06:48:45.434426 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.194998 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:48:54 crc kubenswrapper[4913]: E0121 06:48:54.195638 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="extract" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.195653 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="extract" Jan 21 06:48:54 crc kubenswrapper[4913]: E0121 06:48:54.195665 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="util" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.195671 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="util" Jan 21 06:48:54 crc kubenswrapper[4913]: E0121 06:48:54.195683 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="pull" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.195691 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="pull" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.195819 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="extract" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.196246 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.198236 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4h7gf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.198496 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.213931 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.243615 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.243658 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.243931 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: 
\"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.344750 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.344847 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.344871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.350997 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.360136 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.361100 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.513903 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.748225 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:48:55 crc kubenswrapper[4913]: I0121 06:48:55.493199 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" event={"ID":"0400ab56-f59a-4483-83d7-56db6e482138","Type":"ContainerStarted","Data":"34dad13f69ec8ebac45464947f13649925c0f206b2f50748da475e0fdda03067"} Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.503529 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" event={"ID":"0400ab56-f59a-4483-83d7-56db6e482138","Type":"ContainerStarted","Data":"8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f"} Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.503904 4913 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.526031 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" podStartSLOduration=1.2649362960000001 podStartE2EDuration="3.526015846s" podCreationTimestamp="2026-01-21 06:48:54 +0000 UTC" firstStartedPulling="2026-01-21 06:48:54.759580234 +0000 UTC m=+824.555939917" lastFinishedPulling="2026-01-21 06:48:57.020659794 +0000 UTC m=+826.817019467" observedRunningTime="2026-01-21 06:48:57.522812598 +0000 UTC m=+827.319172271" watchObservedRunningTime="2026-01-21 06:48:57.526015846 +0000 UTC m=+827.322375519" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.726337 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.727427 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.729684 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openshift-service-ca.crt" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.730398 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"galera-openstack-dockercfg-6gtwj" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.732544 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"kube-root-ca.crt" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.732582 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openstack-scripts" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.732561 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openstack-config-data" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.736252 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.737222 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.745383 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.746535 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.747013 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.756998 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.760628 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892226 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892292 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892333 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892367 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-1\" (UID: 
\"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892400 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892440 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892471 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892500 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892533 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") pod \"openstack-galera-1\" (UID: 
\"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892644 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892690 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892723 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892769 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892807 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892852 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892881 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892900 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.893009 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994268 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994324 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994344 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994367 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994384 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994398 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994429 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994471 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994493 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994517 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994542 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") 
" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994565 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994628 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994650 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994672 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994692 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 
06:48:57.994720 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994741 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995012 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") device mount path \"/mnt/openstack/pv10\"" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995496 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995012 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") device mount path \"/mnt/openstack/pv02\"" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995538 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995659 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995965 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") device mount path \"/mnt/openstack/pv05\"" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.996003 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.996443 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.996510 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.996550 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.997497 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.997868 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.998001 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.999150 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") pod \"openstack-galera-1\" (UID: 
\"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:57.996098 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.016335 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.017840 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.022077 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.035074 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.038948 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.044612 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.071829 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.093909 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.104802 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.535969 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.544530 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.550091 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:48:58 crc kubenswrapper[4913]: W0121 06:48:58.557493 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddedf2cc4_5f64_40c5_83da_cf1e0cfebf6c.slice/crio-275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac WatchSource:0}: Error finding container 275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac: Status 404 returned error can't find the container with id 275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac Jan 21 06:48:58 crc kubenswrapper[4913]: W0121 06:48:58.560774 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedaae817_2cda_4274_bad0_53165cffa224.slice/crio-9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d WatchSource:0}: Error finding container 9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d: Status 404 returned error can't find the container with id 9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d Jan 21 06:48:59 crc kubenswrapper[4913]: I0121 06:48:59.515707 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerStarted","Data":"5f5d4f1ef26e68f7b2d31a9b3d84d0da1ff312a47ab5657edc54afc49f04f096"} Jan 21 06:48:59 crc kubenswrapper[4913]: 
I0121 06:48:59.516888 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerStarted","Data":"9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d"} Jan 21 06:48:59 crc kubenswrapper[4913]: I0121 06:48:59.521182 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerStarted","Data":"275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac"} Jan 21 06:49:04 crc kubenswrapper[4913]: I0121 06:49:04.517848 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.269051 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.269733 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.272149 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"memcached-config-data" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.272890 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"memcached-memcached-dockercfg-g4l2t" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.287128 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.448345 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.448759 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.448870 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.555212 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx747\" (UniqueName: 
\"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.555316 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.555334 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.556018 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.557872 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.567462 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerStarted","Data":"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8"} Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.601629 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.883243 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:07 crc kubenswrapper[4913]: I0121 06:49:07.065449 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:49:07 crc kubenswrapper[4913]: W0121 06:49:07.070665 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac820b36_83fb_44ca_97b0_6181846a5ef3.slice/crio-7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92 WatchSource:0}: Error finding container 7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92: Status 404 returned error can't find the container with id 7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92 Jan 21 06:49:07 crc kubenswrapper[4913]: I0121 06:49:07.573634 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"ac820b36-83fb-44ca-97b0-6181846a5ef3","Type":"ContainerStarted","Data":"7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92"} Jan 21 06:49:07 crc kubenswrapper[4913]: I0121 06:49:07.576474 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerStarted","Data":"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49"} Jan 21 06:49:07 crc kubenswrapper[4913]: I0121 06:49:07.578278 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" 
event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerStarted","Data":"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf"} Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.060404 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.061775 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.064245 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-sdlkw" Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.079656 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.189422 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") pod \"rabbitmq-cluster-operator-index-5dtkj\" (UID: \"d5725475-8b61-45a7-91e8-1d28e9042910\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.290783 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") pod \"rabbitmq-cluster-operator-index-5dtkj\" (UID: \"d5725475-8b61-45a7-91e8-1d28e9042910\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.320691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltk68\" (UniqueName: 
\"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") pod \"rabbitmq-cluster-operator-index-5dtkj\" (UID: \"d5725475-8b61-45a7-91e8-1d28e9042910\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.379263 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.591624 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"ac820b36-83fb-44ca-97b0-6181846a5ef3","Type":"ContainerStarted","Data":"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3"} Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.592268 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/memcached-0" Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.787138 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/memcached-0" podStartSLOduration=1.932974258 podStartE2EDuration="3.787106032s" podCreationTimestamp="2026-01-21 06:49:06 +0000 UTC" firstStartedPulling="2026-01-21 06:49:07.072934562 +0000 UTC m=+836.869294235" lastFinishedPulling="2026-01-21 06:49:08.927066336 +0000 UTC m=+838.723426009" observedRunningTime="2026-01-21 06:49:09.612358022 +0000 UTC m=+839.408717705" watchObservedRunningTime="2026-01-21 06:49:09.787106032 +0000 UTC m=+839.583465735" Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.788365 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.597446 4913 generic.go:334] "Generic (PLEG): container finished" podID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerID="00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49" exitCode=0 Jan 21 06:49:10 crc 
kubenswrapper[4913]: I0121 06:49:10.597518 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerDied","Data":"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49"} Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.600730 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" event={"ID":"d5725475-8b61-45a7-91e8-1d28e9042910","Type":"ContainerStarted","Data":"f8fbd38ff1590a71df6d9f315408484aabe627daa085b88b69fdaab05a28c092"} Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.609305 4913 generic.go:334] "Generic (PLEG): container finished" podID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerID="11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8" exitCode=0 Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.609367 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerDied","Data":"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8"} Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.611715 4913 generic.go:334] "Generic (PLEG): container finished" podID="edaae817-2cda-4274-bad0-53165cffa224" containerID="b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf" exitCode=0 Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.612311 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerDied","Data":"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf"} Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.618101 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" 
event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerStarted","Data":"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2"}
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.619850 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerStarted","Data":"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa"}
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.621185 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerStarted","Data":"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94"}
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.645862 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-0" podStartSLOduration=7.876087523 podStartE2EDuration="15.645843473s" podCreationTimestamp="2026-01-21 06:48:56 +0000 UTC" firstStartedPulling="2026-01-21 06:48:58.559436565 +0000 UTC m=+828.355796238" lastFinishedPulling="2026-01-21 06:49:06.329192515 +0000 UTC m=+836.125552188" observedRunningTime="2026-01-21 06:49:11.644207068 +0000 UTC m=+841.440566741" watchObservedRunningTime="2026-01-21 06:49:11.645843473 +0000 UTC m=+841.442203146"
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.667972 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-1" podStartSLOduration=7.857434921 podStartE2EDuration="15.667953789s" podCreationTimestamp="2026-01-21 06:48:56 +0000 UTC" firstStartedPulling="2026-01-21 06:48:58.5625813 +0000 UTC m=+828.358941013" lastFinishedPulling="2026-01-21 06:49:06.373100198 +0000 UTC m=+836.169459881" observedRunningTime="2026-01-21 06:49:11.665863281 +0000 UTC m=+841.462222954" watchObservedRunningTime="2026-01-21 06:49:11.667953789 +0000 UTC m=+841.464313472"
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.701855 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-2" podStartSLOduration=7.920944941 podStartE2EDuration="15.701836808s" podCreationTimestamp="2026-01-21 06:48:56 +0000 UTC" firstStartedPulling="2026-01-21 06:48:58.543919718 +0000 UTC m=+828.340279391" lastFinishedPulling="2026-01-21 06:49:06.324811585 +0000 UTC m=+836.121171258" observedRunningTime="2026-01-21 06:49:11.696500271 +0000 UTC m=+841.492859954" watchObservedRunningTime="2026-01-21 06:49:11.701836808 +0000 UTC m=+841.498196481"
Jan 21 06:49:15 crc kubenswrapper[4913]: I0121 06:49:15.652703 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" event={"ID":"d5725475-8b61-45a7-91e8-1d28e9042910","Type":"ContainerStarted","Data":"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30"}
Jan 21 06:49:15 crc kubenswrapper[4913]: I0121 06:49:15.676364 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" podStartSLOduration=1.2871669350000001 podStartE2EDuration="6.676337994s" podCreationTimestamp="2026-01-21 06:49:09 +0000 UTC" firstStartedPulling="2026-01-21 06:49:09.792206161 +0000 UTC m=+839.588565854" lastFinishedPulling="2026-01-21 06:49:15.18137725 +0000 UTC m=+844.977736913" observedRunningTime="2026-01-21 06:49:15.673179828 +0000 UTC m=+845.469539541" watchObservedRunningTime="2026-01-21 06:49:15.676337994 +0000 UTC m=+845.472697707"
Jan 21 06:49:16 crc kubenswrapper[4913]: I0121 06:49:16.885489 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.073242 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-0"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.073827 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-0"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.095137 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-1"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.095201 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-1"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.105851 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.105916 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 21 06:49:19 crc kubenswrapper[4913]: I0121 06:49:19.380059 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:19 crc kubenswrapper[4913]: I0121 06:49:19.380136 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:19 crc kubenswrapper[4913]: I0121 06:49:19.424467 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:20 crc kubenswrapper[4913]: I0121 06:49:20.445405 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 21 06:49:20 crc kubenswrapper[4913]: I0121 06:49:20.523500 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.808910 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"]
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.810256 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.813802 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"openstack-mariadb-root-db-secret"
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.828435 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"]
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.938444 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.938530 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.039748 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.039857 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.040710 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.060876 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.186312 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.401905 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"]
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.725927 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-thtbk" event={"ID":"09733cef-ac9b-4a13-92a5-4b416079180f","Type":"ContainerStarted","Data":"e41ffcdab252a256602e7c621f585fee394f0f1b8f113871d9c3d7de9d58193b"}
Jan 21 06:49:28 crc kubenswrapper[4913]: I0121 06:49:28.163912 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/openstack-galera-2" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera" probeResult="failure" output=<
Jan 21 06:49:28 crc kubenswrapper[4913]: wsrep_local_state_comment (Donor/Desynced) differs from Synced
Jan 21 06:49:28 crc kubenswrapper[4913]: >
Jan 21 06:49:29 crc kubenswrapper[4913]: I0121 06:49:29.416016 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:32 crc kubenswrapper[4913]: I0121 06:49:32.775724 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-thtbk" event={"ID":"09733cef-ac9b-4a13-92a5-4b416079180f","Type":"ContainerStarted","Data":"ad4e9a5725df8589b88531ce1b43bf4f6ce13685f4a05de5d6349c46dcad6a4a"}
Jan 21 06:49:32 crc kubenswrapper[4913]: I0121 06:49:32.792265 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/root-account-create-update-thtbk" podStartSLOduration=6.7922460000000004 podStartE2EDuration="6.792246s" podCreationTimestamp="2026-01-21 06:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:49:32.789713191 +0000 UTC m=+862.586072884" watchObservedRunningTime="2026-01-21 06:49:32.792246 +0000 UTC m=+862.588605673"
Jan 21 06:49:34 crc kubenswrapper[4913]: I0121 06:49:34.792047 4913 generic.go:334] "Generic (PLEG): container finished" podID="09733cef-ac9b-4a13-92a5-4b416079180f" containerID="ad4e9a5725df8589b88531ce1b43bf4f6ce13685f4a05de5d6349c46dcad6a4a" exitCode=0
Jan 21 06:49:34 crc kubenswrapper[4913]: I0121 06:49:34.792137 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-thtbk" event={"ID":"09733cef-ac9b-4a13-92a5-4b416079180f","Type":"ContainerDied","Data":"ad4e9a5725df8589b88531ce1b43bf4f6ce13685f4a05de5d6349c46dcad6a4a"}
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.168746 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.294089 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") pod \"09733cef-ac9b-4a13-92a5-4b416079180f\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") "
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.294225 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") pod \"09733cef-ac9b-4a13-92a5-4b416079180f\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") "
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.294753 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09733cef-ac9b-4a13-92a5-4b416079180f" (UID: "09733cef-ac9b-4a13-92a5-4b416079180f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.300274 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45" (OuterVolumeSpecName: "kube-api-access-6wh45") pod "09733cef-ac9b-4a13-92a5-4b416079180f" (UID: "09733cef-ac9b-4a13-92a5-4b416079180f"). InnerVolumeSpecName "kube-api-access-6wh45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.338840 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-1"
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.395573 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.395625 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") on node \"crc\" DevicePath \"\""
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.423378 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-1"
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.810280 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-thtbk" event={"ID":"09733cef-ac9b-4a13-92a5-4b416079180f","Type":"ContainerDied","Data":"e41ffcdab252a256602e7c621f585fee394f0f1b8f113871d9c3d7de9d58193b"}
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.810348 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e41ffcdab252a256602e7c621f585fee394f0f1b8f113871d9c3d7de9d58193b"
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.810298 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.685488 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"]
Jan 21 06:49:40 crc kubenswrapper[4913]: E0121 06:49:40.686983 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09733cef-ac9b-4a13-92a5-4b416079180f" containerName="mariadb-account-create-update"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.687059 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="09733cef-ac9b-4a13-92a5-4b416079180f" containerName="mariadb-account-create-update"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.687228 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="09733cef-ac9b-4a13-92a5-4b416079180f" containerName="mariadb-account-create-update"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.688050 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.689817 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.700527 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"]
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.856891 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.857153 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.857305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.958769 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.958871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.959000 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.959728 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.959819 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.991342 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.002477 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.479497 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"]
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.847415 4913 generic.go:334] "Generic (PLEG): container finished" podID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerID="436316e77fad673adee43600b81c8e8cb659f723e40fde5ac692b7f2f5e51c80" exitCode=0
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.847525 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerDied","Data":"436316e77fad673adee43600b81c8e8cb659f723e40fde5ac692b7f2f5e51c80"}
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.847751 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerStarted","Data":"c884ae2c47d957e330460d081e220f62d940e2a97f174bd338c9f15f97922f16"}
Jan 21 06:49:42 crc kubenswrapper[4913]: I0121 06:49:42.660922 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-0"
Jan 21 06:49:42 crc kubenswrapper[4913]: I0121 06:49:42.755994 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-0"
Jan 21 06:49:43 crc kubenswrapper[4913]: I0121 06:49:43.865407 4913 generic.go:334] "Generic (PLEG): container finished" podID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerID="37009c48c11ee62bd23237579f9cc9c8d427c5cbaddb700f28802586ebc40376" exitCode=0
Jan 21 06:49:43 crc kubenswrapper[4913]: I0121 06:49:43.865504 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerDied","Data":"37009c48c11ee62bd23237579f9cc9c8d427c5cbaddb700f28802586ebc40376"}
Jan 21 06:49:44 crc kubenswrapper[4913]: I0121 06:49:44.873990 4913 generic.go:334] "Generic (PLEG): container finished" podID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerID="ab7eba0415a79bbb3100d97d9966a99002b3c45fe402ca2d92dfeca4328093d3" exitCode=0
Jan 21 06:49:44 crc kubenswrapper[4913]: I0121 06:49:44.874327 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerDied","Data":"ab7eba0415a79bbb3100d97d9966a99002b3c45fe402ca2d92dfeca4328093d3"}
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.235688 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.357304 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") pod \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") "
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.357405 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") pod \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") "
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.357797 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") pod \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") "
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.358120 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle" (OuterVolumeSpecName: "bundle") pod "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" (UID: "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.358289 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.365555 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm" (OuterVolumeSpecName: "kube-api-access-gbbdm") pod "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" (UID: "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff"). InnerVolumeSpecName "kube-api-access-gbbdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.390213 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util" (OuterVolumeSpecName: "util") pod "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" (UID: "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.459452 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") on node \"crc\" DevicePath \"\""
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.459640 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") on node \"crc\" DevicePath \"\""
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.903681 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerDied","Data":"c884ae2c47d957e330460d081e220f62d940e2a97f174bd338c9f15f97922f16"}
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.903730 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c884ae2c47d957e330460d081e220f62d940e2a97f174bd338c9f15f97922f16"
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.903763 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.024320 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"]
Jan 21 06:49:56 crc kubenswrapper[4913]: E0121 06:49:56.026063 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="pull"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.026166 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="pull"
Jan 21 06:49:56 crc kubenswrapper[4913]: E0121 06:49:56.026260 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="util"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.026339 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="util"
Jan 21 06:49:56 crc kubenswrapper[4913]: E0121 06:49:56.026419 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="extract"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.026499 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="extract"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.026731 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="extract"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.027296 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.034909 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-wxng4"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.056972 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"]
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.184170 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") pod \"rabbitmq-cluster-operator-779fc9694b-d8cz5\" (UID: \"f401b62e-8ebd-413e-a383-d9e74626c3d4\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.286244 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") pod \"rabbitmq-cluster-operator-779fc9694b-d8cz5\" (UID: \"f401b62e-8ebd-413e-a383-d9e74626c3d4\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.316262 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") pod \"rabbitmq-cluster-operator-779fc9694b-d8cz5\" (UID: \"f401b62e-8ebd-413e-a383-d9e74626c3d4\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.351020 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.473500 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9xvbq"]
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.475000 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.492726 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"]
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.589541 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.589752 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.589842 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.691140 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.691231 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.691265 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.691726 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.692303 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.710650 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.813506 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"]
Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.816759 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xvbq"
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:56.998394 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" event={"ID":"f401b62e-8ebd-413e-a383-d9e74626c3d4","Type":"ContainerStarted","Data":"35b959c40aff587948c5fd74b98b898c0bc76e951ec34079c1bec3b80111a1d1"}
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.124943 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"]
Jan 21 06:49:57 crc kubenswrapper[4913]: W0121 06:49:57.129226 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf63b56df_1330_4e60_8eb8_000cdd3d6a19.slice/crio-451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07 WatchSource:0}: Error finding container 451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07: Status 404 returned error can't find the container with id 451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.659312 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"]
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.660488 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvqfr"
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.663718 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"]
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.808005 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr"
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.808473 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr"
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.808642 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr"
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910067 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr"
Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910152 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910190 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910617 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910876 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.936160 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.984650 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:58 crc kubenswrapper[4913]: I0121 06:49:58.008657 4913 generic.go:334] "Generic (PLEG): container finished" podID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerID="bc17a3b4e7007c1a6edef6ce6cf75a6a6d868472071ad902e9b4ecff476c27f7" exitCode=0 Jan 21 06:49:58 crc kubenswrapper[4913]: I0121 06:49:58.008709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerDied","Data":"bc17a3b4e7007c1a6edef6ce6cf75a6a6d868472071ad902e9b4ecff476c27f7"} Jan 21 06:49:58 crc kubenswrapper[4913]: I0121 06:49:58.008740 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerStarted","Data":"451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07"} Jan 21 06:49:58 crc kubenswrapper[4913]: I0121 06:49:58.445425 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:49:58 crc kubenswrapper[4913]: W0121 06:49:58.457820 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdfc921b_6ade_4913_afd4_4b75ebcead15.slice/crio-f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba WatchSource:0}: Error finding container f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba: Status 404 returned error can't find the container with id f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba Jan 21 06:49:59 crc kubenswrapper[4913]: I0121 06:49:59.018148 4913 generic.go:334] "Generic (PLEG): container finished" podID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerID="2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498" exitCode=0 Jan 21 06:49:59 crc kubenswrapper[4913]: I0121 
06:49:59.018408 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerDied","Data":"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498"} Jan 21 06:49:59 crc kubenswrapper[4913]: I0121 06:49:59.018733 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerStarted","Data":"f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba"} Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.039625 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerDied","Data":"7cfbbb260090cca3cda10b88b0e9c196363805754f66ac431ea64beb1eda9291"} Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.039904 4913 generic.go:334] "Generic (PLEG): container finished" podID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerID="7cfbbb260090cca3cda10b88b0e9c196363805754f66ac431ea64beb1eda9291" exitCode=0 Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.042507 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" event={"ID":"f401b62e-8ebd-413e-a383-d9e74626c3d4","Type":"ContainerStarted","Data":"3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630"} Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.044773 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerStarted","Data":"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377"} Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.083371 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" podStartSLOduration=1.5687557810000001 podStartE2EDuration="5.083341584s" podCreationTimestamp="2026-01-21 06:49:56 +0000 UTC" firstStartedPulling="2026-01-21 06:49:56.826382822 +0000 UTC m=+886.622742495" lastFinishedPulling="2026-01-21 06:50:00.340968625 +0000 UTC m=+890.137328298" observedRunningTime="2026-01-21 06:50:01.078133292 +0000 UTC m=+890.874492975" watchObservedRunningTime="2026-01-21 06:50:01.083341584 +0000 UTC m=+890.879701297" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.052500 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerStarted","Data":"29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337"} Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.055866 4913 generic.go:334] "Generic (PLEG): container finished" podID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerID="4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377" exitCode=0 Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.055984 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerDied","Data":"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377"} Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.098029 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9xvbq" podStartSLOduration=2.605754447 podStartE2EDuration="6.097145551s" podCreationTimestamp="2026-01-21 06:49:56 +0000 UTC" firstStartedPulling="2026-01-21 06:49:58.01047848 +0000 UTC m=+887.806838153" lastFinishedPulling="2026-01-21 06:50:01.501869594 +0000 UTC m=+891.298229257" observedRunningTime="2026-01-21 06:50:02.077482457 +0000 UTC m=+891.873842170" watchObservedRunningTime="2026-01-21 
06:50:02.097145551 +0000 UTC m=+891.893505254" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.653451 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.654871 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.662949 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.776194 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.776674 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.776726 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7pn5\" (UniqueName: \"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.877705 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.878630 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.879017 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7pn5\" (UniqueName: \"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.878988 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.878525 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.936489 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7pn5\" (UniqueName: 
\"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.980604 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:03 crc kubenswrapper[4913]: I0121 06:50:03.133713 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerStarted","Data":"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e"} Jan 21 06:50:03 crc kubenswrapper[4913]: I0121 06:50:03.156496 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvqfr" podStartSLOduration=3.436100337 podStartE2EDuration="6.156477283s" podCreationTimestamp="2026-01-21 06:49:57 +0000 UTC" firstStartedPulling="2026-01-21 06:49:59.83642477 +0000 UTC m=+889.632784453" lastFinishedPulling="2026-01-21 06:50:02.556801726 +0000 UTC m=+892.353161399" observedRunningTime="2026-01-21 06:50:03.15048078 +0000 UTC m=+892.946840473" watchObservedRunningTime="2026-01-21 06:50:03.156477283 +0000 UTC m=+892.952836956" Jan 21 06:50:03 crc kubenswrapper[4913]: I0121 06:50:03.476818 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:03 crc kubenswrapper[4913]: W0121 06:50:03.497756 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa7f103e_4216_4cf5_b6a7_42b907744bba.slice/crio-ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05 WatchSource:0}: Error finding container ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05: Status 404 returned error can't find the 
container with id ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05 Jan 21 06:50:04 crc kubenswrapper[4913]: I0121 06:50:04.141722 4913 generic.go:334] "Generic (PLEG): container finished" podID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerID="a300cf1311edbe2e8917475c52980c805a2b63750a7dc8830211bf50b92e71c9" exitCode=0 Jan 21 06:50:04 crc kubenswrapper[4913]: I0121 06:50:04.143519 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerDied","Data":"a300cf1311edbe2e8917475c52980c805a2b63750a7dc8830211bf50b92e71c9"} Jan 21 06:50:04 crc kubenswrapper[4913]: I0121 06:50:04.143565 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerStarted","Data":"ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05"} Jan 21 06:50:06 crc kubenswrapper[4913]: I0121 06:50:06.817132 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:06 crc kubenswrapper[4913]: I0121 06:50:06.817466 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:06 crc kubenswrapper[4913]: I0121 06:50:06.858717 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.057345 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.058586 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.060407 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.062033 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"rabbitmq-server-conf" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.062040 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"rabbitmq-plugins-conf" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.062275 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-server-dockercfg-x86ft" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.064335 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-default-user" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.071296 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.186851 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.186900 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.186936 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.186970 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.187028 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.187220 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.187338 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc 
kubenswrapper[4913]: I0121 06:50:07.187571 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.223619 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289285 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289357 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289381 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289412 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289443 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289462 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289502 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289555 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289996 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " 
pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.290038 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.290264 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.291945 4913 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.291972 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9e2837abecefc96eeb2280c3452599d3ba232e1d6e20df970e5810ccca23e04e/globalmount\"" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.297113 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.297396 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.298898 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.308805 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.318338 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.373736 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.807788 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:50:07 crc kubenswrapper[4913]: W0121 06:50:07.811437 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b3d506_1b7a_4e74_8e75_bd5ad371a3e7.slice/crio-f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385 WatchSource:0}: Error finding container f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385: Status 404 returned error can't find the container with id f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385 Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.985626 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.985676 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.179371 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerStarted","Data":"f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385"} Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.181628 4913 generic.go:334] "Generic (PLEG): container finished" podID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerID="3aa48f0fbcec5babec897d165283dcc38b0310a8438e263618189600feb1cda5" exitCode=0 Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.181714 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" 
event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerDied","Data":"3aa48f0fbcec5babec897d165283dcc38b0310a8438e263618189600feb1cda5"} Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.319430 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.319718 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:50:09 crc kubenswrapper[4913]: I0121 06:50:09.042333 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qvqfr" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" probeResult="failure" output=< Jan 21 06:50:09 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s Jan 21 06:50:09 crc kubenswrapper[4913]: > Jan 21 06:50:09 crc kubenswrapper[4913]: I0121 06:50:09.195832 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerStarted","Data":"8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f"} Jan 21 06:50:09 crc kubenswrapper[4913]: I0121 06:50:09.222271 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngbvn" podStartSLOduration=2.638537919 podStartE2EDuration="7.22225414s" podCreationTimestamp="2026-01-21 06:50:02 +0000 UTC" firstStartedPulling="2026-01-21 06:50:04.144173641 +0000 
UTC m=+893.940533354" lastFinishedPulling="2026-01-21 06:50:08.727889902 +0000 UTC m=+898.524249575" observedRunningTime="2026-01-21 06:50:09.219045793 +0000 UTC m=+899.015405486" watchObservedRunningTime="2026-01-21 06:50:09.22225414 +0000 UTC m=+899.018613813" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.661853 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.662909 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.665143 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-ldzbh" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.668567 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.686055 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") pod \"keystone-operator-index-nvvrn\" (UID: \"aafa8ec9-8d47-454f-ade6-cc83939b040d\") " pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.805311 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") pod \"keystone-operator-index-nvvrn\" (UID: \"aafa8ec9-8d47-454f-ade6-cc83939b040d\") " pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.833921 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") pod \"keystone-operator-index-nvvrn\" (UID: \"aafa8ec9-8d47-454f-ade6-cc83939b040d\") " pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.980901 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.980956 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.984359 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:13 crc kubenswrapper[4913]: I0121 06:50:13.058791 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:13 crc kubenswrapper[4913]: I0121 06:50:13.291117 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:15 crc kubenswrapper[4913]: I0121 06:50:15.449358 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:50:15 crc kubenswrapper[4913]: I0121 06:50:15.449909 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9xvbq" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" containerID="cri-o://29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" gracePeriod=2 Jan 21 06:50:16 crc kubenswrapper[4913]: E0121 06:50:16.820206 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 06:50:16 crc kubenswrapper[4913]: E0121 06:50:16.822788 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 06:50:16 crc kubenswrapper[4913]: E0121 06:50:16.825412 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 06:50:16 crc kubenswrapper[4913]: E0121 06:50:16.825524 4913 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-9xvbq" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" Jan 21 06:50:18 crc kubenswrapper[4913]: I0121 06:50:18.052255 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:18 crc kubenswrapper[4913]: I0121 06:50:18.118681 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:18 crc kubenswrapper[4913]: I0121 06:50:18.457998 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:18 crc kubenswrapper[4913]: I0121 06:50:18.458374 4913 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-ngbvn" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="registry-server" containerID="cri-o://8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f" gracePeriod=2 Jan 21 06:50:19 crc kubenswrapper[4913]: I0121 06:50:19.276374 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9xvbq_f63b56df-1330-4e60-8eb8-000cdd3d6a19/registry-server/0.log" Jan 21 06:50:19 crc kubenswrapper[4913]: I0121 06:50:19.277307 4913 generic.go:334] "Generic (PLEG): container finished" podID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" exitCode=137 Jan 21 06:50:19 crc kubenswrapper[4913]: I0121 06:50:19.277349 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerDied","Data":"29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337"} Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.295184 4913 generic.go:334] "Generic (PLEG): container finished" podID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerID="8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f" exitCode=0 Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.295265 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerDied","Data":"8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f"} Jan 21 06:50:21 crc kubenswrapper[4913]: E0121 06:50:21.329826 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="rabbitmq:4.1.1-management" Jan 21 06:50:21 crc kubenswrapper[4913]: E0121 06:50:21.329871 4913 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="rabbitmq:4.1.1-management" Jan 21 06:50:21 crc kubenswrapper[4913]: E0121 06:50:21.329987 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:rabbitmq:4.1.1-management,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgplc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_cinder-kuttl-tests(05b3d506-1b7a-4e74-8e75-bd5ad371a3e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 21 06:50:21 crc kubenswrapper[4913]: E0121 06:50:21.331049 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.355644 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.357479 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9xvbq_f63b56df-1330-4e60-8eb8-000cdd3d6a19/registry-server/0.log" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.359499 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.538063 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") pod \"aa7f103e-4216-4cf5-b6a7-42b907744bba\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.539278 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities" (OuterVolumeSpecName: "utilities") pod "aa7f103e-4216-4cf5-b6a7-42b907744bba" (UID: "aa7f103e-4216-4cf5-b6a7-42b907744bba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.539460 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7pn5\" (UniqueName: \"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") pod \"aa7f103e-4216-4cf5-b6a7-42b907744bba\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.540569 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") pod \"aa7f103e-4216-4cf5-b6a7-42b907744bba\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.541804 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") pod \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.541883 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") pod \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.541913 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") pod \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.542344 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.543485 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities" (OuterVolumeSpecName: "utilities") pod "f63b56df-1330-4e60-8eb8-000cdd3d6a19" (UID: "f63b56df-1330-4e60-8eb8-000cdd3d6a19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.546309 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2" (OuterVolumeSpecName: "kube-api-access-ddpl2") pod "f63b56df-1330-4e60-8eb8-000cdd3d6a19" (UID: "f63b56df-1330-4e60-8eb8-000cdd3d6a19"). InnerVolumeSpecName "kube-api-access-ddpl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.546407 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5" (OuterVolumeSpecName: "kube-api-access-h7pn5") pod "aa7f103e-4216-4cf5-b6a7-42b907744bba" (UID: "aa7f103e-4216-4cf5-b6a7-42b907744bba"). InnerVolumeSpecName "kube-api-access-h7pn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.572026 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa7f103e-4216-4cf5-b6a7-42b907744bba" (UID: "aa7f103e-4216-4cf5-b6a7-42b907744bba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.618238 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f63b56df-1330-4e60-8eb8-000cdd3d6a19" (UID: "f63b56df-1330-4e60-8eb8-000cdd3d6a19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643494 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7pn5\" (UniqueName: \"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643538 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643560 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643571 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643581 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.721309 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.306777 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerDied","Data":"ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05"} Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.306820 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.307120 4913 scope.go:117] "RemoveContainer" containerID="8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.310398 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nvvrn" event={"ID":"aafa8ec9-8d47-454f-ade6-cc83939b040d","Type":"ContainerStarted","Data":"13b2addf8c21bece7103dc74546b4b535b876e4503f539b9501595a1b88972a6"} Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.315921 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9xvbq_f63b56df-1330-4e60-8eb8-000cdd3d6a19/registry-server/0.log" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.321310 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerDied","Data":"451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07"} Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.321490 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:22 crc kubenswrapper[4913]: E0121 06:50:22.324992 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"rabbitmq:4.1.1-management\\\"\"" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.366613 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.373469 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.383236 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.389504 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.446284 4913 scope.go:117] "RemoveContainer" containerID="3aa48f0fbcec5babec897d165283dcc38b0310a8438e263618189600feb1cda5" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.460565 4913 scope.go:117] "RemoveContainer" containerID="a300cf1311edbe2e8917475c52980c805a2b63750a7dc8830211bf50b92e71c9" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.490630 4913 scope.go:117] "RemoveContainer" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.513033 4913 scope.go:117] "RemoveContainer" containerID="7cfbbb260090cca3cda10b88b0e9c196363805754f66ac431ea64beb1eda9291" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.538873 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" path="/var/lib/kubelet/pods/aa7f103e-4216-4cf5-b6a7-42b907744bba/volumes" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.539682 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" path="/var/lib/kubelet/pods/f63b56df-1330-4e60-8eb8-000cdd3d6a19/volumes" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.543292 4913 scope.go:117] "RemoveContainer" containerID="bc17a3b4e7007c1a6edef6ce6cf75a6a6d868472071ad902e9b4ecff476c27f7" Jan 21 06:50:23 crc kubenswrapper[4913]: I0121 06:50:23.329355 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nvvrn" event={"ID":"aafa8ec9-8d47-454f-ade6-cc83939b040d","Type":"ContainerStarted","Data":"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741"} Jan 21 06:50:23 crc kubenswrapper[4913]: I0121 06:50:23.356800 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-nvvrn" podStartSLOduration=10.593800254 podStartE2EDuration="11.356767337s" podCreationTimestamp="2026-01-21 06:50:12 +0000 UTC" firstStartedPulling="2026-01-21 06:50:21.736337877 +0000 UTC m=+911.532697550" lastFinishedPulling="2026-01-21 06:50:22.49930496 +0000 UTC m=+912.295664633" observedRunningTime="2026-01-21 06:50:23.349663544 +0000 UTC m=+913.146023257" watchObservedRunningTime="2026-01-21 06:50:23.356767337 +0000 UTC m=+913.153127050" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061067 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061689 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="extract-utilities" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061705 4913 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="extract-utilities" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061717 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="extract-utilities" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061727 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="extract-utilities" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061741 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="extract-content" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061750 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="extract-content" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061769 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061778 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061794 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061803 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061813 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="extract-content" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061821 4913 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="extract-content" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061963 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061975 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.063128 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.075988 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.207330 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.207414 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.207511 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") pod \"certified-operators-hc6rw\" (UID: 
\"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309111 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309177 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309232 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309749 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309828 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") 
" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.337983 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.395720 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.451769 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.452082 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvqfr" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" containerID="cri-o://158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" gracePeriod=2 Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.668927 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.843758 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.020115 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") pod \"bdfc921b-6ade-4913-afd4-4b75ebcead15\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.020414 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") pod \"bdfc921b-6ade-4913-afd4-4b75ebcead15\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.020443 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") pod \"bdfc921b-6ade-4913-afd4-4b75ebcead15\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.021128 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities" (OuterVolumeSpecName: "utilities") pod "bdfc921b-6ade-4913-afd4-4b75ebcead15" (UID: "bdfc921b-6ade-4913-afd4-4b75ebcead15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.025767 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm" (OuterVolumeSpecName: "kube-api-access-lbljm") pod "bdfc921b-6ade-4913-afd4-4b75ebcead15" (UID: "bdfc921b-6ade-4913-afd4-4b75ebcead15"). InnerVolumeSpecName "kube-api-access-lbljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.122002 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.122043 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.126873 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdfc921b-6ade-4913-afd4-4b75ebcead15" (UID: "bdfc921b-6ade-4913-afd4-4b75ebcead15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.224266 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364242 4913 generic.go:334] "Generic (PLEG): container finished" podID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerID="158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" exitCode=0 Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364305 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerDied","Data":"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e"} Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364331 4913 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerDied","Data":"f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba"} Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364347 4913 scope.go:117] "RemoveContainer" containerID="158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364472 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.367787 4913 generic.go:334] "Generic (PLEG): container finished" podID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerID="b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3" exitCode=0 Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.367854 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerDied","Data":"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3"} Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.367901 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerStarted","Data":"054f662824bbecf02694f4aaa504011845fb096af1a535fac9264ca69281bc4f"} Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.385821 4913 scope.go:117] "RemoveContainer" containerID="4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.409856 4913 scope.go:117] "RemoveContainer" containerID="2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.415839 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.427907 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.449457 4913 scope.go:117] "RemoveContainer" containerID="158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" Jan 21 06:50:27 crc kubenswrapper[4913]: E0121 06:50:27.449989 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e\": container with ID starting with 158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e not found: ID does not exist" containerID="158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.450040 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e"} err="failed to get container status \"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e\": rpc error: code = NotFound desc = could not find container \"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e\": container with ID starting with 158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e not found: ID does not exist" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.450070 4913 scope.go:117] "RemoveContainer" containerID="4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377" Jan 21 06:50:27 crc kubenswrapper[4913]: E0121 06:50:27.450692 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377\": container with ID starting with 4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377 not 
found: ID does not exist" containerID="4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.450746 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377"} err="failed to get container status \"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377\": rpc error: code = NotFound desc = could not find container \"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377\": container with ID starting with 4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377 not found: ID does not exist" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.450779 4913 scope.go:117] "RemoveContainer" containerID="2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498" Jan 21 06:50:27 crc kubenswrapper[4913]: E0121 06:50:27.451220 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498\": container with ID starting with 2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498 not found: ID does not exist" containerID="2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.451249 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498"} err="failed to get container status \"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498\": rpc error: code = NotFound desc = could not find container \"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498\": container with ID starting with 2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498 not found: ID does not exist" Jan 21 06:50:28 crc kubenswrapper[4913]: I0121 06:50:28.375823 
4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerStarted","Data":"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6"} Jan 21 06:50:28 crc kubenswrapper[4913]: I0121 06:50:28.534658 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" path="/var/lib/kubelet/pods/bdfc921b-6ade-4913-afd4-4b75ebcead15/volumes" Jan 21 06:50:29 crc kubenswrapper[4913]: I0121 06:50:29.394675 4913 generic.go:334] "Generic (PLEG): container finished" podID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerID="28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6" exitCode=0 Jan 21 06:50:29 crc kubenswrapper[4913]: I0121 06:50:29.394811 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerDied","Data":"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6"} Jan 21 06:50:30 crc kubenswrapper[4913]: I0121 06:50:30.405127 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerStarted","Data":"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633"} Jan 21 06:50:30 crc kubenswrapper[4913]: I0121 06:50:30.441260 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hc6rw" podStartSLOduration=1.905263704 podStartE2EDuration="4.441228665s" podCreationTimestamp="2026-01-21 06:50:26 +0000 UTC" firstStartedPulling="2026-01-21 06:50:27.372919695 +0000 UTC m=+917.169279368" lastFinishedPulling="2026-01-21 06:50:29.908884636 +0000 UTC m=+919.705244329" observedRunningTime="2026-01-21 06:50:30.43074945 +0000 UTC m=+920.227109173" watchObservedRunningTime="2026-01-21 
06:50:30.441228665 +0000 UTC m=+920.237588388" Jan 21 06:50:32 crc kubenswrapper[4913]: I0121 06:50:32.984462 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:32 crc kubenswrapper[4913]: I0121 06:50:32.984820 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:33 crc kubenswrapper[4913]: I0121 06:50:33.016689 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:33 crc kubenswrapper[4913]: I0121 06:50:33.456986 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:36 crc kubenswrapper[4913]: I0121 06:50:36.395916 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:36 crc kubenswrapper[4913]: I0121 06:50:36.396303 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:36 crc kubenswrapper[4913]: I0121 06:50:36.459046 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:36 crc kubenswrapper[4913]: I0121 06:50:36.521551 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:38 crc kubenswrapper[4913]: I0121 06:50:38.319417 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:50:38 crc kubenswrapper[4913]: I0121 06:50:38.319875 4913 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:50:39 crc kubenswrapper[4913]: I0121 06:50:39.474444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerStarted","Data":"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d"} Jan 21 06:50:42 crc kubenswrapper[4913]: I0121 06:50:42.454221 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:42 crc kubenswrapper[4913]: I0121 06:50:42.455222 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hc6rw" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="registry-server" containerID="cri-o://79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" gracePeriod=2 Jan 21 06:50:42 crc kubenswrapper[4913]: I0121 06:50:42.956881 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.058822 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") pod \"51f64f62-a622-4f16-9931-159d25ea6a0d\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.058885 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") pod \"51f64f62-a622-4f16-9931-159d25ea6a0d\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.058961 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") pod \"51f64f62-a622-4f16-9931-159d25ea6a0d\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.059774 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities" (OuterVolumeSpecName: "utilities") pod "51f64f62-a622-4f16-9931-159d25ea6a0d" (UID: "51f64f62-a622-4f16-9931-159d25ea6a0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.064682 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx" (OuterVolumeSpecName: "kube-api-access-b45qx") pod "51f64f62-a622-4f16-9931-159d25ea6a0d" (UID: "51f64f62-a622-4f16-9931-159d25ea6a0d"). InnerVolumeSpecName "kube-api-access-b45qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.121461 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51f64f62-a622-4f16-9931-159d25ea6a0d" (UID: "51f64f62-a622-4f16-9931-159d25ea6a0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.160217 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.160256 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.160274 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504306 4913 generic.go:334] "Generic (PLEG): container finished" podID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerID="79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" exitCode=0 Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504353 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerDied","Data":"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633"} Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504374 4913 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504394 4913 scope.go:117] "RemoveContainer" containerID="79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504382 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerDied","Data":"054f662824bbecf02694f4aaa504011845fb096af1a535fac9264ca69281bc4f"} Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.541067 4913 scope.go:117] "RemoveContainer" containerID="28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.541082 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.548274 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.567443 4913 scope.go:117] "RemoveContainer" containerID="b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.584883 4913 scope.go:117] "RemoveContainer" containerID="79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" Jan 21 06:50:43 crc kubenswrapper[4913]: E0121 06:50:43.585284 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633\": container with ID starting with 79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633 not found: ID does not exist" containerID="79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.585321 
4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633"} err="failed to get container status \"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633\": rpc error: code = NotFound desc = could not find container \"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633\": container with ID starting with 79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633 not found: ID does not exist" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.585341 4913 scope.go:117] "RemoveContainer" containerID="28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6" Jan 21 06:50:43 crc kubenswrapper[4913]: E0121 06:50:43.585644 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6\": container with ID starting with 28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6 not found: ID does not exist" containerID="28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.585708 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6"} err="failed to get container status \"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6\": rpc error: code = NotFound desc = could not find container \"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6\": container with ID starting with 28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6 not found: ID does not exist" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.585748 4913 scope.go:117] "RemoveContainer" containerID="b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3" Jan 21 06:50:43 crc kubenswrapper[4913]: E0121 
06:50:43.586110 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3\": container with ID starting with b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3 not found: ID does not exist" containerID="b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.586133 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3"} err="failed to get container status \"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3\": rpc error: code = NotFound desc = could not find container \"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3\": container with ID starting with b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3 not found: ID does not exist" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.544299 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" path="/var/lib/kubelet/pods/51f64f62-a622-4f16-9931-159d25ea6a0d/volumes" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.718944 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719211 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="extract-content" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719226 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="extract-content" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719241 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="extract-content" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719249 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="extract-content" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719262 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="extract-utilities" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719280 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="extract-utilities" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719296 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719305 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719326 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="extract-utilities" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719334 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="extract-utilities" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719348 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719355 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719486 4913 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719504 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.720902 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.723022 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.728410 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.901176 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.901231 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.901307 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.002936 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.003003 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.003081 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.003519 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") pod 
\"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.003697 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.030296 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.050417 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.512669 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.530221 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerStarted","Data":"2b9ab5e998a7f78e1348156646ad20a9c7210ce8c6d3c88a0c223b6660c003b2"} Jan 21 06:50:46 crc kubenswrapper[4913]: I0121 06:50:46.540687 4913 generic.go:334] "Generic (PLEG): container finished" podID="e3feb49b-10bf-4116-91b9-e9b726161892" containerID="f64e19c7af4171a78023ca3711a7eec83f0f3b9547ff3c69e634b90c2c0582db" exitCode=0 Jan 21 06:50:46 crc kubenswrapper[4913]: I0121 06:50:46.540794 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerDied","Data":"f64e19c7af4171a78023ca3711a7eec83f0f3b9547ff3c69e634b90c2c0582db"} Jan 21 06:50:47 crc kubenswrapper[4913]: I0121 06:50:47.551436 4913 generic.go:334] "Generic (PLEG): container finished" podID="e3feb49b-10bf-4116-91b9-e9b726161892" containerID="60ac37c77e23483afc0614ffbcd77f3112a7195bb5009179ec07fc76cbf42d75" exitCode=0 Jan 21 06:50:47 crc kubenswrapper[4913]: I0121 06:50:47.551492 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerDied","Data":"60ac37c77e23483afc0614ffbcd77f3112a7195bb5009179ec07fc76cbf42d75"} Jan 21 06:50:48 crc kubenswrapper[4913]: I0121 06:50:48.563645 4913 generic.go:334] 
"Generic (PLEG): container finished" podID="e3feb49b-10bf-4116-91b9-e9b726161892" containerID="461bda799565e5924857f0b3e4f758b75acec0c9a9a9ac5312facf66ecd33abe" exitCode=0 Jan 21 06:50:48 crc kubenswrapper[4913]: I0121 06:50:48.563711 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerDied","Data":"461bda799565e5924857f0b3e4f758b75acec0c9a9a9ac5312facf66ecd33abe"} Jan 21 06:50:49 crc kubenswrapper[4913]: I0121 06:50:49.963898 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.077039 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") pod \"e3feb49b-10bf-4116-91b9-e9b726161892\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.077300 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") pod \"e3feb49b-10bf-4116-91b9-e9b726161892\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.077837 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") pod \"e3feb49b-10bf-4116-91b9-e9b726161892\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.079360 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle" (OuterVolumeSpecName: "bundle") pod "e3feb49b-10bf-4116-91b9-e9b726161892" (UID: "e3feb49b-10bf-4116-91b9-e9b726161892"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.085848 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994" (OuterVolumeSpecName: "kube-api-access-99994") pod "e3feb49b-10bf-4116-91b9-e9b726161892" (UID: "e3feb49b-10bf-4116-91b9-e9b726161892"). InnerVolumeSpecName "kube-api-access-99994". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.098217 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util" (OuterVolumeSpecName: "util") pod "e3feb49b-10bf-4116-91b9-e9b726161892" (UID: "e3feb49b-10bf-4116-91b9-e9b726161892"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.179374 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.179411 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.179427 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.583743 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerDied","Data":"2b9ab5e998a7f78e1348156646ad20a9c7210ce8c6d3c88a0c223b6660c003b2"} Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.583798 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9ab5e998a7f78e1348156646ad20a9c7210ce8c6d3c88a0c223b6660c003b2" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.583830 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.541237 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:51:00 crc kubenswrapper[4913]: E0121 06:51:00.542872 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="util" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.542941 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="util" Jan 21 06:51:00 crc kubenswrapper[4913]: E0121 06:51:00.543006 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="pull" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.543059 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="pull" Jan 21 06:51:00 crc kubenswrapper[4913]: E0121 06:51:00.543123 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="extract" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.543181 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="extract" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.543341 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="extract" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.543806 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.546379 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.546518 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9v6xc" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.553120 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.730325 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.730369 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.730545 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" 
(UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.831424 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.831498 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.831526 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.836522 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.841907 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.859813 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.868020 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:01 crc kubenswrapper[4913]: I0121 06:51:01.382892 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:51:01 crc kubenswrapper[4913]: I0121 06:51:01.664731 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" event={"ID":"2eed1c9d-583b-4678-a6d4-25ede526deb2","Type":"ContainerStarted","Data":"5f11053bf6e8005edf5c878b1053cb5b2f458f735b16ba02d777871ab59cfd24"} Jan 21 06:51:05 crc kubenswrapper[4913]: I0121 06:51:05.698437 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" event={"ID":"2eed1c9d-583b-4678-a6d4-25ede526deb2","Type":"ContainerStarted","Data":"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2"} Jan 21 06:51:05 crc kubenswrapper[4913]: I0121 
06:51:05.699181 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:05 crc kubenswrapper[4913]: I0121 06:51:05.724833 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" podStartSLOduration=1.762090171 podStartE2EDuration="5.72481513s" podCreationTimestamp="2026-01-21 06:51:00 +0000 UTC" firstStartedPulling="2026-01-21 06:51:01.392389693 +0000 UTC m=+951.188749366" lastFinishedPulling="2026-01-21 06:51:05.355114652 +0000 UTC m=+955.151474325" observedRunningTime="2026-01-21 06:51:05.720519623 +0000 UTC m=+955.516879366" watchObservedRunningTime="2026-01-21 06:51:05.72481513 +0000 UTC m=+955.521174803" Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.319658 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.320210 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.320306 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.321378 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.321522 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908" gracePeriod=600 Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.722450 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908" exitCode=0 Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.722513 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908"} Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.722702 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6"} Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.722723 4913 scope.go:117] "RemoveContainer" containerID="9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3" Jan 21 06:51:10 crc kubenswrapper[4913]: I0121 06:51:10.876126 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 
06:51:11 crc kubenswrapper[4913]: I0121 06:51:11.752121 4913 generic.go:334] "Generic (PLEG): container finished" podID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerID="f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d" exitCode=0 Jan 21 06:51:11 crc kubenswrapper[4913]: I0121 06:51:11.752180 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerDied","Data":"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d"} Jan 21 06:51:12 crc kubenswrapper[4913]: I0121 06:51:12.772107 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerStarted","Data":"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54"} Jan 21 06:51:12 crc kubenswrapper[4913]: I0121 06:51:12.773172 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:51:12 crc kubenswrapper[4913]: I0121 06:51:12.796817 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/rabbitmq-server-0" podStartSLOduration=37.117300665 podStartE2EDuration="1m6.796802856s" podCreationTimestamp="2026-01-21 06:50:06 +0000 UTC" firstStartedPulling="2026-01-21 06:50:07.813826122 +0000 UTC m=+897.610185795" lastFinishedPulling="2026-01-21 06:50:37.493328313 +0000 UTC m=+927.289687986" observedRunningTime="2026-01-21 06:51:12.794097164 +0000 UTC m=+962.590456837" watchObservedRunningTime="2026-01-21 06:51:12.796802856 +0000 UTC m=+962.593162529" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.659849 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.662281 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.664565 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-index-dockercfg-c9j95" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.673558 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.743424 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") pod \"cinder-operator-index-4jlfb\" (UID: \"4f61c697-fbcc-4e33-929b-03eacd477d73\") " pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.845657 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") pod \"cinder-operator-index-4jlfb\" (UID: \"4f61c697-fbcc-4e33-929b-03eacd477d73\") " pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.871448 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") pod \"cinder-operator-index-4jlfb\" (UID: \"4f61c697-fbcc-4e33-929b-03eacd477d73\") " pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.998711 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:16 crc kubenswrapper[4913]: I0121 06:51:16.474542 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:51:16 crc kubenswrapper[4913]: W0121 06:51:16.475215 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f61c697_fbcc_4e33_929b_03eacd477d73.slice/crio-d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6 WatchSource:0}: Error finding container d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6: Status 404 returned error can't find the container with id d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6 Jan 21 06:51:16 crc kubenswrapper[4913]: I0121 06:51:16.477057 4913 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 06:51:16 crc kubenswrapper[4913]: I0121 06:51:16.819549 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-4jlfb" event={"ID":"4f61c697-fbcc-4e33-929b-03eacd477d73","Type":"ContainerStarted","Data":"d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6"} Jan 21 06:51:18 crc kubenswrapper[4913]: I0121 06:51:18.835938 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-4jlfb" event={"ID":"4f61c697-fbcc-4e33-929b-03eacd477d73","Type":"ContainerStarted","Data":"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d"} Jan 21 06:51:18 crc kubenswrapper[4913]: I0121 06:51:18.866498 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-index-4jlfb" podStartSLOduration=2.157842719 podStartE2EDuration="3.866480861s" podCreationTimestamp="2026-01-21 06:51:15 +0000 UTC" firstStartedPulling="2026-01-21 06:51:16.476837578 +0000 UTC m=+966.273197251" 
lastFinishedPulling="2026-01-21 06:51:18.18547571 +0000 UTC m=+967.981835393" observedRunningTime="2026-01-21 06:51:18.862857933 +0000 UTC m=+968.659217646" watchObservedRunningTime="2026-01-21 06:51:18.866480861 +0000 UTC m=+968.662840534" Jan 21 06:51:26 crc kubenswrapper[4913]: I0121 06:51:25.999543 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:26 crc kubenswrapper[4913]: I0121 06:51:26.000343 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:26 crc kubenswrapper[4913]: I0121 06:51:26.041218 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:26 crc kubenswrapper[4913]: I0121 06:51:26.935476 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:27 crc kubenswrapper[4913]: I0121 06:51:27.378722 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.113406 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.116289 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.127100 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.129362 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.256201 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.256826 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.257080 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 
06:51:30.358834 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.358923 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.359091 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.359758 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.360082 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.393120 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.452336 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.918903 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:51:31 crc kubenswrapper[4913]: I0121 06:51:31.972036 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerStarted","Data":"3f2f464ed3abf9f52d4aa03e74d6bb9a3616be90a49c8face21759e38f52702f"} Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.353428 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.354864 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.358879 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-db-secret" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.359971 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.361062 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.363925 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.368577 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.508465 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.508954 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.509185 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.509450 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.610789 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.611123 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.611212 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 
06:51:33.611368 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.612159 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.612970 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.636211 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.642571 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " 
pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.692850 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.708691 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.029818 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.173152 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:51:34 crc kubenswrapper[4913]: W0121 06:51:34.183789 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ce82f18_1e1d_40f1_8207_428ea9445bc3.slice/crio-a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63 WatchSource:0}: Error finding container a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63: Status 404 returned error can't find the container with id a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63 Jan 21 06:51:34 crc kubenswrapper[4913]: E0121 06:51:34.598745 4913 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01ea35b3_9885_4acc_bed4_05b6213940be.slice/crio-e3dd37a383fdecec61ce46d8e11f1c1ebc9bad6811b9f26d39315ce8ccbe7680.scope\": RecentStats: unable to find data in memory cache]" Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.998687 4913 generic.go:334] "Generic (PLEG): container finished" podID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" 
containerID="b5c65ed731440220892793dc0f5f5c1250a99d03d67a71b6685779fcad076adc" exitCode=0 Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.998777 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" event={"ID":"8ce82f18-1e1d-40f1-8207-428ea9445bc3","Type":"ContainerDied","Data":"b5c65ed731440220892793dc0f5f5c1250a99d03d67a71b6685779fcad076adc"} Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.999314 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" event={"ID":"8ce82f18-1e1d-40f1-8207-428ea9445bc3","Type":"ContainerStarted","Data":"a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63"} Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.002198 4913 generic.go:334] "Generic (PLEG): container finished" podID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" containerID="659f77de9cd51e636b4491f066c76f374e6bc4986d8367fb979a0e570225f47e" exitCode=0 Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.002290 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" event={"ID":"15e33604-9af2-42b5-b1ad-ecd76d4898d4","Type":"ContainerDied","Data":"659f77de9cd51e636b4491f066c76f374e6bc4986d8367fb979a0e570225f47e"} Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.002311 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" event={"ID":"15e33604-9af2-42b5-b1ad-ecd76d4898d4","Type":"ContainerStarted","Data":"e10ac755e324f7bd2a233b427bda4103fca92b78e293af0ac59ee5dd08183dc8"} Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.006066 4913 generic.go:334] "Generic (PLEG): container finished" podID="01ea35b3-9885-4acc-bed4-05b6213940be" containerID="e3dd37a383fdecec61ce46d8e11f1c1ebc9bad6811b9f26d39315ce8ccbe7680" exitCode=0 Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.006121 4913 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerDied","Data":"e3dd37a383fdecec61ce46d8e11f1c1ebc9bad6811b9f26d39315ce8ccbe7680"} Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.018088 4913 generic.go:334] "Generic (PLEG): container finished" podID="01ea35b3-9885-4acc-bed4-05b6213940be" containerID="57ae751f0ac8e317da709793a9908e104d2805bb250e026f326c599fd971bccb" exitCode=0 Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.018212 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerDied","Data":"57ae751f0ac8e317da709793a9908e104d2805bb250e026f326c599fd971bccb"} Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.376874 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.459485 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") pod \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.459583 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") pod \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.460479 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ce82f18-1e1d-40f1-8207-428ea9445bc3" (UID: "8ce82f18-1e1d-40f1-8207-428ea9445bc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.467385 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4" (OuterVolumeSpecName: "kube-api-access-wcpg4") pod "8ce82f18-1e1d-40f1-8207-428ea9445bc3" (UID: "8ce82f18-1e1d-40f1-8207-428ea9445bc3"). InnerVolumeSpecName "kube-api-access-wcpg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.469236 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.561092 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") pod \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.561188 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") pod \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.561544 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.561575 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.562285 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15e33604-9af2-42b5-b1ad-ecd76d4898d4" (UID: "15e33604-9af2-42b5-b1ad-ecd76d4898d4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.566209 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t" (OuterVolumeSpecName: "kube-api-access-c598t") pod "15e33604-9af2-42b5-b1ad-ecd76d4898d4" (UID: "15e33604-9af2-42b5-b1ad-ecd76d4898d4"). InnerVolumeSpecName "kube-api-access-c598t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.663537 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.663616 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.032583 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.032577 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" event={"ID":"8ce82f18-1e1d-40f1-8207-428ea9445bc3","Type":"ContainerDied","Data":"a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63"} Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.032793 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.035565 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" event={"ID":"15e33604-9af2-42b5-b1ad-ecd76d4898d4","Type":"ContainerDied","Data":"e10ac755e324f7bd2a233b427bda4103fca92b78e293af0ac59ee5dd08183dc8"} Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.035619 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.035646 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10ac755e324f7bd2a233b427bda4103fca92b78e293af0ac59ee5dd08183dc8" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.039282 4913 generic.go:334] "Generic (PLEG): container finished" podID="01ea35b3-9885-4acc-bed4-05b6213940be" containerID="92a35170c3a228e725dc4577bc820bf64539f262c32a337b31f25d4c32fe9af7" exitCode=0 Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.039337 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerDied","Data":"92a35170c3a228e725dc4577bc820bf64539f262c32a337b31f25d4c32fe9af7"} Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.327539 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.488322 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") pod \"01ea35b3-9885-4acc-bed4-05b6213940be\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.488454 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") pod \"01ea35b3-9885-4acc-bed4-05b6213940be\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.488533 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") pod \"01ea35b3-9885-4acc-bed4-05b6213940be\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.490905 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle" (OuterVolumeSpecName: "bundle") pod "01ea35b3-9885-4acc-bed4-05b6213940be" (UID: "01ea35b3-9885-4acc-bed4-05b6213940be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.494481 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4" (OuterVolumeSpecName: "kube-api-access-zwlh4") pod "01ea35b3-9885-4acc-bed4-05b6213940be" (UID: "01ea35b3-9885-4acc-bed4-05b6213940be"). InnerVolumeSpecName "kube-api-access-zwlh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.505488 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util" (OuterVolumeSpecName: "util") pod "01ea35b3-9885-4acc-bed4-05b6213940be" (UID: "01ea35b3-9885-4acc-bed4-05b6213940be"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.589920 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.589961 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.589973 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850109 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850769 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="pull" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850812 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="pull" Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850839 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="extract" Jan 21 
06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850857 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="extract" Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850895 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" containerName="mariadb-account-create-update" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850914 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" containerName="mariadb-account-create-update" Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850941 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="util" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850957 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="util" Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850995 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" containerName="mariadb-database-create" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.851011 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" containerName="mariadb-database-create" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.851272 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" containerName="mariadb-account-create-update" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.851304 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" containerName="mariadb-database-create" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.851325 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" 
containerName="extract" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.852209 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.854862 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.858031 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-c5jhw" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.858344 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.858997 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.871502 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.995966 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.996059 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.057523 4913 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerDied","Data":"3f2f464ed3abf9f52d4aa03e74d6bb9a3616be90a49c8face21759e38f52702f"} Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.057565 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f2f464ed3abf9f52d4aa03e74d6bb9a3616be90a49c8face21759e38f52702f" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.057684 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.097722 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.097845 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.102294 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.113690 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.190462 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.686389 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:51:40 crc kubenswrapper[4913]: I0121 06:51:40.067161 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" event={"ID":"345b0465-d6ca-45e5-bd9d-47a6adacb366","Type":"ContainerStarted","Data":"6931bb12a1c9b42fed5b142341ed107474b2250142986fb9d99419b2528a5a14"} Jan 21 06:51:47 crc kubenswrapper[4913]: I0121 06:51:47.124491 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" event={"ID":"345b0465-d6ca-45e5-bd9d-47a6adacb366","Type":"ContainerStarted","Data":"a22a190e1ff258d8fa0dac8b8437c0615341f76e3c06c6668ce4c7053be4e2a1"} Jan 21 06:51:47 crc kubenswrapper[4913]: I0121 06:51:47.154384 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" podStartSLOduration=2.040497727 podStartE2EDuration="9.15436201s" podCreationTimestamp="2026-01-21 06:51:38 +0000 UTC" firstStartedPulling="2026-01-21 06:51:39.705771344 +0000 UTC m=+989.502131017" lastFinishedPulling="2026-01-21 06:51:46.819635627 +0000 UTC m=+996.615995300" observedRunningTime="2026-01-21 06:51:47.146872138 +0000 UTC m=+996.943231821" watchObservedRunningTime="2026-01-21 06:51:47.15436201 +0000 UTC m=+996.950721693" Jan 21 06:51:51 crc kubenswrapper[4913]: I0121 06:51:51.149891 4913 generic.go:334] "Generic (PLEG): container finished" 
podID="345b0465-d6ca-45e5-bd9d-47a6adacb366" containerID="a22a190e1ff258d8fa0dac8b8437c0615341f76e3c06c6668ce4c7053be4e2a1" exitCode=0
Jan 21 06:51:51 crc kubenswrapper[4913]: I0121 06:51:51.150026 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" event={"ID":"345b0465-d6ca-45e5-bd9d-47a6adacb366","Type":"ContainerDied","Data":"a22a190e1ff258d8fa0dac8b8437c0615341f76e3c06c6668ce4c7053be4e2a1"}
Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.532204 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-dd79k"
Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.593082 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") pod \"345b0465-d6ca-45e5-bd9d-47a6adacb366\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") "
Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.593156 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") pod \"345b0465-d6ca-45e5-bd9d-47a6adacb366\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") "
Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.618884 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4" (OuterVolumeSpecName: "kube-api-access-wsvn4") pod "345b0465-d6ca-45e5-bd9d-47a6adacb366" (UID: "345b0465-d6ca-45e5-bd9d-47a6adacb366"). InnerVolumeSpecName "kube-api-access-wsvn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.693237 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data" (OuterVolumeSpecName: "config-data") pod "345b0465-d6ca-45e5-bd9d-47a6adacb366" (UID: "345b0465-d6ca-45e5-bd9d-47a6adacb366"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.695011 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") on node \"crc\" DevicePath \"\""
Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.695040 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.165265 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" event={"ID":"345b0465-d6ca-45e5-bd9d-47a6adacb366","Type":"ContainerDied","Data":"6931bb12a1c9b42fed5b142341ed107474b2250142986fb9d99419b2528a5a14"}
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.165299 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6931bb12a1c9b42fed5b142341ed107474b2250142986fb9d99419b2528a5a14"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.165326 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-dd79k"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.363315 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"]
Jan 21 06:51:53 crc kubenswrapper[4913]: E0121 06:51:53.363542 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345b0465-d6ca-45e5-bd9d-47a6adacb366" containerName="keystone-db-sync"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.363553 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="345b0465-d6ca-45e5-bd9d-47a6adacb366" containerName="keystone-db-sync"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.363683 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="345b0465-d6ca-45e5-bd9d-47a6adacb366" containerName="keystone-db-sync"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.364119 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.366069 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.366839 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.366950 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"osp-secret"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.366883 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.371381 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-c5jhw"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.389786 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"]
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405228 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405281 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405304 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405331 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405354 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.506934 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.507006 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.507034 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.507058 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.507081 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.510470 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.510881 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.510964 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.512902 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.533047 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.683632 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:54 crc kubenswrapper[4913]: I0121 06:51:54.138880 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"]
Jan 21 06:51:54 crc kubenswrapper[4913]: I0121 06:51:54.174241 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" event={"ID":"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56","Type":"ContainerStarted","Data":"075d4a9a9bb4a3c21cd06c76917b83915cf7f052402d9e8109d8ea058367eccd"}
Jan 21 06:51:55 crc kubenswrapper[4913]: I0121 06:51:55.183189 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" event={"ID":"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56","Type":"ContainerStarted","Data":"65cd0cea93dd7b6363d627c73b949bbd4db992664864de3f7764764d0faf09c3"}
Jan 21 06:51:57 crc kubenswrapper[4913]: I0121 06:51:57.197918 4913 generic.go:334] "Generic (PLEG): container finished" podID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" containerID="65cd0cea93dd7b6363d627c73b949bbd4db992664864de3f7764764d0faf09c3" exitCode=0
Jan 21 06:51:57 crc kubenswrapper[4913]: I0121 06:51:57.198120 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" event={"ID":"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56","Type":"ContainerDied","Data":"65cd0cea93dd7b6363d627c73b949bbd4db992664864de3f7764764d0faf09c3"}
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.475623 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"]
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.477868 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.483414 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wlhbb"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.483602 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-service-cert"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.500141 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"]
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.567420 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.588474 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.588568 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.588626 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689510 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") "
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689666 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") "
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689708 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") "
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689727 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") "
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689780 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") "
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.690013 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.691097 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.691148 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.697759 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.698343 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts" (OuterVolumeSpecName: "scripts") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.698404 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96" (OuterVolumeSpecName: "kube-api-access-p4c96") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "kube-api-access-p4c96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.703840 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.706405 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.706469 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.713080 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data" (OuterVolumeSpecName: "config-data") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.736301 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792852 4913 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792897 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") on node \"crc\" DevicePath \"\""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792910 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792925 4913 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792935 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.868944 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.117625 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"]
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.210725 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.210723 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" event={"ID":"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56","Type":"ContainerDied","Data":"075d4a9a9bb4a3c21cd06c76917b83915cf7f052402d9e8109d8ea058367eccd"}
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.210879 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075d4a9a9bb4a3c21cd06c76917b83915cf7f052402d9e8109d8ea058367eccd"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.212457 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" event={"ID":"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d","Type":"ContainerStarted","Data":"f82866f4a640b05de032b2242387c51f49628b80b3fcaf42729718719aa9d672"}
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.305178 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"]
Jan 21 06:51:59 crc kubenswrapper[4913]: E0121 06:51:59.305486 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" containerName="keystone-bootstrap"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.305511 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" containerName="keystone-bootstrap"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.305692 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" containerName="keystone-bootstrap"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.306172 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.310243 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.310512 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.310677 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.310804 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-c5jhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.318278 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"]
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407005 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407084 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407149 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407187 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407210 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512526 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512621 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512678 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512760 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512843 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.517272 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.517286 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.517407 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.522325 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.530234 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.622278 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:52:00 crc kubenswrapper[4913]: I0121 06:52:00.055650 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"]
Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.229677 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" event={"ID":"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d","Type":"ContainerStarted","Data":"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a"}
Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.230061 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.231166 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" event={"ID":"fde82b66-4c57-4f59-839e-5ccb89d18944","Type":"ContainerStarted","Data":"d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588"}
Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.231222 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" event={"ID":"fde82b66-4c57-4f59-839e-5ccb89d18944","Type":"ContainerStarted","Data":"319d2fc6458bad5a006b1117b9ecf9841ebe516000026a3a782671bea30c10cd"}
Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.231346 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw"
Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.251346 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" podStartSLOduration=1.923891597 podStartE2EDuration="3.25132972s" podCreationTimestamp="2026-01-21 06:51:58 +0000 UTC" firstStartedPulling="2026-01-21 06:51:59.127422539 +0000 UTC m=+1008.923782212" lastFinishedPulling="2026-01-21 06:52:00.454860622 +0000 UTC m=+1010.251220335" observedRunningTime="2026-01-21 06:52:01.247780764 +0000 UTC m=+1011.044140477" watchObservedRunningTime="2026-01-21 06:52:01.25132972 +0000 UTC m=+1011.047689393"
Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.270228 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" podStartSLOduration=2.27019848 podStartE2EDuration="2.27019848s" podCreationTimestamp="2026-01-21 06:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:52:01.270165749 +0000 UTC m=+1011.066525432" watchObservedRunningTime="2026-01-21 06:52:01.27019848 +0000 UTC m=+1011.066558193"
Jan 21 06:52:08 crc kubenswrapper[4913]: I0121 06:52:08.872678 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.310187 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"]
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.311428 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-88w4d"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.317106 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"]
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.404998 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"]
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.405824 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.407673 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-db-secret"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.417895 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"]
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.428524 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.428709 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.530417 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.530489 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.530610 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.530766 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.531170 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.574431 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d"
Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.631945 4913 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.632143 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.632212 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.633021 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.656584 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.719105 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:15 crc kubenswrapper[4913]: I0121 06:52:15.065115 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"] Jan 21 06:52:15 crc kubenswrapper[4913]: I0121 06:52:15.161402 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"] Jan 21 06:52:15 crc kubenswrapper[4913]: I0121 06:52:15.336210 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-88w4d" event={"ID":"84e5eed1-ff67-483b-808d-466413987e09","Type":"ContainerStarted","Data":"8a3f20a49f6d57365eb79b7bf4e963d8c25f5eb4c885817d083623c4901b1ce7"} Jan 21 06:52:15 crc kubenswrapper[4913]: I0121 06:52:15.337886 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" event={"ID":"de97e815-d3a3-4a3d-81e2-6054f65b82f0","Type":"ContainerStarted","Data":"b5393f25576b08afc85732da2f72e652c47836419a36e49a4a89ca0fdc5ced01"} Jan 21 06:52:17 crc kubenswrapper[4913]: I0121 06:52:17.352418 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-88w4d" event={"ID":"84e5eed1-ff67-483b-808d-466413987e09","Type":"ContainerStarted","Data":"adff856e206c29a325cd94fb3c5dd663d301aeb9797e99aeea00e843d581cf79"} Jan 21 06:52:17 crc kubenswrapper[4913]: I0121 06:52:17.353398 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" event={"ID":"de97e815-d3a3-4a3d-81e2-6054f65b82f0","Type":"ContainerStarted","Data":"bd6ec813f35e1b00979db706282fff37dec10f9429ac91969695adf37e90c613"} Jan 21 06:52:17 crc kubenswrapper[4913]: I0121 06:52:17.370801 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-create-88w4d" podStartSLOduration=3.370783723 podStartE2EDuration="3.370783723s" 
podCreationTimestamp="2026-01-21 06:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:52:17.367849213 +0000 UTC m=+1027.164208896" watchObservedRunningTime="2026-01-21 06:52:17.370783723 +0000 UTC m=+1027.167143396" Jan 21 06:52:17 crc kubenswrapper[4913]: I0121 06:52:17.388026 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" podStartSLOduration=3.388006608 podStartE2EDuration="3.388006608s" podCreationTimestamp="2026-01-21 06:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:52:17.384826432 +0000 UTC m=+1027.181186125" watchObservedRunningTime="2026-01-21 06:52:17.388006608 +0000 UTC m=+1027.184366301" Jan 21 06:52:18 crc kubenswrapper[4913]: I0121 06:52:18.365547 4913 generic.go:334] "Generic (PLEG): container finished" podID="84e5eed1-ff67-483b-808d-466413987e09" containerID="adff856e206c29a325cd94fb3c5dd663d301aeb9797e99aeea00e843d581cf79" exitCode=0 Jan 21 06:52:18 crc kubenswrapper[4913]: I0121 06:52:18.365650 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-88w4d" event={"ID":"84e5eed1-ff67-483b-808d-466413987e09","Type":"ContainerDied","Data":"adff856e206c29a325cd94fb3c5dd663d301aeb9797e99aeea00e843d581cf79"} Jan 21 06:52:18 crc kubenswrapper[4913]: I0121 06:52:18.369140 4913 generic.go:334] "Generic (PLEG): container finished" podID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" containerID="bd6ec813f35e1b00979db706282fff37dec10f9429ac91969695adf37e90c613" exitCode=0 Jan 21 06:52:18 crc kubenswrapper[4913]: I0121 06:52:18.369210 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" 
event={"ID":"de97e815-d3a3-4a3d-81e2-6054f65b82f0","Type":"ContainerDied","Data":"bd6ec813f35e1b00979db706282fff37dec10f9429ac91969695adf37e90c613"} Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.807258 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.817273 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.909365 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") pod \"84e5eed1-ff67-483b-808d-466413987e09\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.909460 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") pod \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.909539 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") pod \"84e5eed1-ff67-483b-808d-466413987e09\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.909576 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") pod \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\" (UID: 
\"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.910288 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de97e815-d3a3-4a3d-81e2-6054f65b82f0" (UID: "de97e815-d3a3-4a3d-81e2-6054f65b82f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.910434 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84e5eed1-ff67-483b-808d-466413987e09" (UID: "84e5eed1-ff67-483b-808d-466413987e09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.914275 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j" (OuterVolumeSpecName: "kube-api-access-txp7j") pod "84e5eed1-ff67-483b-808d-466413987e09" (UID: "84e5eed1-ff67-483b-808d-466413987e09"). InnerVolumeSpecName "kube-api-access-txp7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.915182 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt" (OuterVolumeSpecName: "kube-api-access-8kjpt") pod "de97e815-d3a3-4a3d-81e2-6054f65b82f0" (UID: "de97e815-d3a3-4a3d-81e2-6054f65b82f0"). InnerVolumeSpecName "kube-api-access-8kjpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.011632 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.011705 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.011734 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.011746 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.393161 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" event={"ID":"de97e815-d3a3-4a3d-81e2-6054f65b82f0","Type":"ContainerDied","Data":"b5393f25576b08afc85732da2f72e652c47836419a36e49a4a89ca0fdc5ced01"} Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.393221 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5393f25576b08afc85732da2f72e652c47836419a36e49a4a89ca0fdc5ced01" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.393221 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.396079 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-88w4d" event={"ID":"84e5eed1-ff67-483b-808d-466413987e09","Type":"ContainerDied","Data":"8a3f20a49f6d57365eb79b7bf4e963d8c25f5eb4c885817d083623c4901b1ce7"} Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.396147 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3f20a49f6d57365eb79b7bf4e963d8c25f5eb4c885817d083623c4901b1ce7" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.396238 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.744284 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:52:24 crc kubenswrapper[4913]: E0121 06:52:24.744901 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" containerName="mariadb-account-create-update" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.744914 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" containerName="mariadb-account-create-update" Jan 21 06:52:24 crc kubenswrapper[4913]: E0121 06:52:24.744927 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e5eed1-ff67-483b-808d-466413987e09" containerName="mariadb-database-create" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.744933 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e5eed1-ff67-483b-808d-466413987e09" containerName="mariadb-database-create" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.745053 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" 
containerName="mariadb-account-create-update" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.745073 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e5eed1-ff67-483b-808d-466413987e09" containerName="mariadb-database-create" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.745449 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.747875 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-stbww" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.749304 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.749666 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.766997 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889243 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889329 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889405 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889440 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889467 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.990748 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.990839 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.990886 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.990945 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.991022 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.991131 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.998020 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.000673 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.001393 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.024553 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.077186 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.573747 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:52:26 crc kubenswrapper[4913]: I0121 06:52:26.444644 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" event={"ID":"431d7c8b-5c95-4534-8cba-dd55885fc5cb","Type":"ContainerStarted","Data":"43414530e40244b9d1e55c4f915e73508410496306477b92733ae02210ce7e56"} Jan 21 06:52:31 crc kubenswrapper[4913]: I0121 06:52:31.018229 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:52:40 crc kubenswrapper[4913]: I0121 06:52:40.566736 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" event={"ID":"431d7c8b-5c95-4534-8cba-dd55885fc5cb","Type":"ContainerStarted","Data":"9e53798c55bdb9c3197d82fb273c6d283738061cb335b3befb8f5bcffda2529a"} Jan 21 06:52:40 crc kubenswrapper[4913]: I0121 06:52:40.589441 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" podStartSLOduration=2.698076102 podStartE2EDuration="16.589419442s" podCreationTimestamp="2026-01-21 06:52:24 +0000 UTC" firstStartedPulling="2026-01-21 06:52:25.58265337 +0000 UTC m=+1035.379013043" lastFinishedPulling="2026-01-21 06:52:39.47399672 +0000 UTC m=+1049.270356383" observedRunningTime="2026-01-21 06:52:40.585371623 +0000 UTC m=+1050.381731306" watchObservedRunningTime="2026-01-21 06:52:40.589419442 +0000 UTC m=+1050.385779235" Jan 21 06:52:45 crc kubenswrapper[4913]: I0121 06:52:45.613037 4913 generic.go:334] "Generic (PLEG): container finished" podID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" containerID="9e53798c55bdb9c3197d82fb273c6d283738061cb335b3befb8f5bcffda2529a" exitCode=0 Jan 21 06:52:45 crc kubenswrapper[4913]: I0121 06:52:45.613112 
4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" event={"ID":"431d7c8b-5c95-4534-8cba-dd55885fc5cb","Type":"ContainerDied","Data":"9e53798c55bdb9c3197d82fb273c6d283738061cb335b3befb8f5bcffda2529a"} Jan 21 06:52:46 crc kubenswrapper[4913]: I0121 06:52:46.955508 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.037768 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.037913 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.037943 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.037976 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.038007 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") "
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.038244 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.043849 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.044838 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5" (OuterVolumeSpecName: "kube-api-access-lqqs5") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "kube-api-access-lqqs5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.047695 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts" (OuterVolumeSpecName: "scripts") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.091684 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data" (OuterVolumeSpecName: "config-data") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139334 4913 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139366 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139376 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139384 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139393 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") on node \"crc\" DevicePath \"\""
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.633362 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" event={"ID":"431d7c8b-5c95-4534-8cba-dd55885fc5cb","Type":"ContainerDied","Data":"43414530e40244b9d1e55c4f915e73508410496306477b92733ae02210ce7e56"}
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.633776 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43414530e40244b9d1e55c4f915e73508410496306477b92733ae02210ce7e56"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.633447 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.978270 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"]
Jan 21 06:52:47 crc kubenswrapper[4913]: E0121 06:52:47.979056 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" containerName="cinder-db-sync"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.979081 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" containerName="cinder-db-sync"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.979496 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" containerName="cinder-db-sync"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.982921 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.988390 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-stbww"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.990182 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.990437 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data"
Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.991439 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scheduler-config-data"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.014906 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"]
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.044289 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.045941 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.050129 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-volume-volume1-config-data"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051401 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051446 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051467 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051496 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051512 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.055619 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.082643 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"]
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.084683 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.087790 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-backup-config-data"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.115896 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"]
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153120 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153156 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153174 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153192 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153213 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153228 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153245 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153261 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153282 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153306 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153321 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153338 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153354 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153368 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153383 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153400 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153415 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153439 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153457 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153488 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153504 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153669 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153692 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153706 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153733 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153756 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153770 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153790 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153808 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153826 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153842 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153863 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153879 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.154827 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.170790 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.172314 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.177121 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.177189 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.245853 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.246766 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.252938 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255242 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255284 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255315 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255333 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255353 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255386 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255408 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255433 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255460 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255459 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255482 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255505 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255538 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255539 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255560 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255567 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255585 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255610 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255630 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255717 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255533 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255736 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255770 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255718 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255819 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255823 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255867 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255903 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255924 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255947 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255933 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255981 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256017 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256054 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256063 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256089 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256098 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256131 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256136 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256097 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256171 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256146 4913 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256217 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256271 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256274 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256312 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256342 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") pod \"cinder-backup-0\" (UID: 
\"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256422 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.264187 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.264674 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.264910 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.265499 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.266394 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: 
\"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.269060 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.269665 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.276456 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.290922 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.307725 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.357202 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.357778 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.357868 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.357995 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.358154 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 
06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.358262 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.360915 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.420055 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459281 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459451 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459551 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459693 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459758 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459872 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459982 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.460419 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.465577 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 
06:52:48.467221 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.474708 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.478734 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.616739 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.428187 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.462574 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:52:49 crc kubenswrapper[4913]: W0121 06:52:49.466184 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff633750_3c62_48b5_b977_f1b4f42b9b7e.slice/crio-e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1 WatchSource:0}: Error finding container e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1: Status 404 returned error can't find the container with id e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1 Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.524983 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:52:49 crc kubenswrapper[4913]: W0121 06:52:49.539711 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb256a85e_47c9_4195_9732_d58250fd3f42.slice/crio-8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf WatchSource:0}: Error finding container 8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf: Status 404 returned error can't find the container with id 8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.588687 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:52:49 crc kubenswrapper[4913]: W0121 06:52:49.598258 4913 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c8afc51_5054_46d8_a16d_e07541ff4af7.slice/crio-0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465 WatchSource:0}: Error finding container 0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465: Status 404 returned error can't find the container with id 0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465 Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.645141 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerStarted","Data":"8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf"} Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.646389 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerStarted","Data":"f981181d61a91c8c4ac8291fd1a2e334c01d5ff76df7a42189afca1df38c5352"} Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.647304 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerStarted","Data":"e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1"} Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.648484 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465"} Jan 21 06:52:50 crc kubenswrapper[4913]: I0121 06:52:50.657214 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerStarted","Data":"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"} Jan 21 06:52:51 crc 
kubenswrapper[4913]: I0121 06:52:51.678817 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerStarted","Data":"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"} Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.679310 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.681870 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerStarted","Data":"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482"} Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.681943 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerStarted","Data":"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729"} Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.704652 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.7046246959999998 podStartE2EDuration="3.704624696s" podCreationTimestamp="2026-01-21 06:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:52:51.700078103 +0000 UTC m=+1061.496437786" watchObservedRunningTime="2026-01-21 06:52:51.704624696 +0000 UTC m=+1061.500984389" Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.730519 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-0" podStartSLOduration=3.997357781 podStartE2EDuration="4.730493133s" podCreationTimestamp="2026-01-21 06:52:47 +0000 UTC" firstStartedPulling="2026-01-21 
06:52:49.443024754 +0000 UTC m=+1059.239384437" lastFinishedPulling="2026-01-21 06:52:50.176160086 +0000 UTC m=+1059.972519789" observedRunningTime="2026-01-21 06:52:51.728065977 +0000 UTC m=+1061.524425660" watchObservedRunningTime="2026-01-21 06:52:51.730493133 +0000 UTC m=+1061.526852846" Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.696000 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700"} Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.696717 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56"} Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.697829 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerStarted","Data":"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963"} Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.697871 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerStarted","Data":"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282"} Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.721091 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podStartSLOduration=2.567464159 podStartE2EDuration="4.721069041s" podCreationTimestamp="2026-01-21 06:52:48 +0000 UTC" firstStartedPulling="2026-01-21 06:52:49.601671889 +0000 UTC m=+1059.398031562" lastFinishedPulling="2026-01-21 06:52:51.755276761 +0000 UTC 
m=+1061.551636444" observedRunningTime="2026-01-21 06:52:52.719568001 +0000 UTC m=+1062.515927674" watchObservedRunningTime="2026-01-21 06:52:52.721069041 +0000 UTC m=+1062.517428724" Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.754457 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-0" podStartSLOduration=2.545032853 podStartE2EDuration="4.754432339s" podCreationTimestamp="2026-01-21 06:52:48 +0000 UTC" firstStartedPulling="2026-01-21 06:52:49.544305993 +0000 UTC m=+1059.340665716" lastFinishedPulling="2026-01-21 06:52:51.753705519 +0000 UTC m=+1061.550065202" observedRunningTime="2026-01-21 06:52:52.74922697 +0000 UTC m=+1062.545586643" watchObservedRunningTime="2026-01-21 06:52:52.754432339 +0000 UTC m=+1062.550792042" Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.308110 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.361412 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.421451 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.722843 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700" exitCode=1 Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.722993 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700"} Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.723516 4913 
scope.go:117] "RemoveContainer" containerID="4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700" Jan 21 06:52:54 crc kubenswrapper[4913]: I0121 06:52:54.737099 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56" exitCode=1 Jan 21 06:52:54 crc kubenswrapper[4913]: I0121 06:52:54.737222 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56"} Jan 21 06:52:54 crc kubenswrapper[4913]: I0121 06:52:54.738656 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"} Jan 21 06:52:54 crc kubenswrapper[4913]: I0121 06:52:54.737817 4913 scope.go:117] "RemoveContainer" containerID="a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56" Jan 21 06:52:55 crc kubenswrapper[4913]: I0121 06:52:55.776846 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"} Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.793042 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" exitCode=1 Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.794741 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.794853 
4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"
Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.795288 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3" exitCode=1
Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.793108 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"}
Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.795855 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"}
Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.795947 4913 scope.go:117] "RemoveContainer" containerID="a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56"
Jan 21 06:52:56 crc kubenswrapper[4913]: E0121 06:52:56.801421 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.878285 4913 scope.go:117] "RemoveContainer" containerID="4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700"
Jan 21 06:52:57 crc kubenswrapper[4913]: I0121 06:52:57.361788 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:57 crc kubenswrapper[4913]: I0121 06:52:57.808970 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"
Jan 21 06:52:57 crc kubenswrapper[4913]: I0121 06:52:57.809004 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"
Jan 21 06:52:57 crc kubenswrapper[4913]: E0121 06:52:57.809277 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.361750 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.361817 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.574511 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.606097 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.816497 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"
Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.816533 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"
Jan 21 06:52:58 crc kubenswrapper[4913]: E0121 06:52:58.816927 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:52:59 crc kubenswrapper[4913]: I0121 06:52:59.824276 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"
Jan 21 06:52:59 crc kubenswrapper[4913]: I0121 06:52:59.824624 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"
Jan 21 06:52:59 crc kubenswrapper[4913]: E0121 06:52:59.825010 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:00 crc kubenswrapper[4913]: I0121 06:53:00.546851 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.873012 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.874920 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.887886 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976345 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976421 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976486 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976666 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976691 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077067 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077337 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077534 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077642 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077744 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077838 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.093710 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.096241 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.097464 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.107441 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.206639 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.676328 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.847181 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerStarted","Data":"b16ed940864ef946c18ba1dbbbda2af9f6fa0f1dd70aecdf23a72e769b81c37b"}
Jan 21 06:53:03 crc kubenswrapper[4913]: I0121 06:53:03.859556 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerStarted","Data":"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732"}
Jan 21 06:53:03 crc kubenswrapper[4913]: I0121 06:53:03.859960 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerStarted","Data":"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d"}
Jan 21 06:53:03 crc kubenswrapper[4913]: I0121 06:53:03.888145 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-1" podStartSLOduration=2.888126501 podStartE2EDuration="2.888126501s" podCreationTimestamp="2026-01-21 06:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:53:03.880478455 +0000 UTC m=+1073.676838128" watchObservedRunningTime="2026-01-21 06:53:03.888126501 +0000 UTC m=+1073.684486174"
Jan 21 06:53:07 crc kubenswrapper[4913]: I0121 06:53:07.207047 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:08 crc kubenswrapper[4913]: I0121 06:53:08.319701 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:53:08 crc kubenswrapper[4913]: I0121 06:53:08.320090 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.429119 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.498317 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.499612 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.515966 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.527848 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.528134 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652035 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652262 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652280 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652294 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652352 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754044 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754170 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754202 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754224 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754246 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754344 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.760261 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.760692 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.761406 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.772718 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.822502 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.938406 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"}
Jan 21 06:53:13 crc kubenswrapper[4913]: I0121 06:53:13.252813 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 21 06:53:13 crc kubenswrapper[4913]: W0121 06:53:13.255051 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3556b2f0_f34e_47d9_b864_c0a7e8b6989c.slice/crio-a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20 WatchSource:0}: Error finding container a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20: Status 404 returned error can't find the container with id a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20
Jan 21 06:53:13 crc kubenswrapper[4913]: I0121 06:53:13.950620 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerStarted","Data":"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499"}
Jan 21 06:53:13 crc kubenswrapper[4913]: I0121 06:53:13.951140 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerStarted","Data":"a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20"}
Jan 21 06:53:13 crc kubenswrapper[4913]: I0121 06:53:13.955552 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"}
Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.968871 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerStarted","Data":"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c"}
Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.971441 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" exitCode=1
Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.971480 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"}
Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.971506 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"
Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.972000 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"
Jan 21 06:53:14 crc kubenswrapper[4913]: E0121 06:53:14.972338 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.017654 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-2" podStartSLOduration=3.01762066 podStartE2EDuration="3.01762066s" podCreationTimestamp="2026-01-21 06:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:53:15.009969344 +0000 UTC m=+1084.806329027" watchObservedRunningTime="2026-01-21 06:53:15.01762066 +0000 UTC m=+1084.813980363"
Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.983583 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca" exitCode=1
Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.983702 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"}
Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.984077 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"
Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.984466 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"
Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.984539 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"
Jan 21 06:53:15 crc kubenswrapper[4913]: E0121 06:53:15.985016 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:17 crc kubenswrapper[4913]: I0121 06:53:17.004691 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"
Jan 21 06:53:17 crc kubenswrapper[4913]: I0121 06:53:17.005722 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"
Jan 21 06:53:17 crc kubenswrapper[4913]: E0121 06:53:17.006376 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:17 crc kubenswrapper[4913]: I0121 06:53:17.822859 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.361052 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.361457 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.361479 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.362423 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"
Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.362446 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"
Jan 21 06:53:18 crc kubenswrapper[4913]: E0121 06:53:18.362857 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:23 crc kubenswrapper[4913]: I0121 06:53:23.046486 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:23 crc kubenswrapper[4913]: I0121 06:53:23.526839 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 21 06:53:23 crc kubenswrapper[4913]: I0121 06:53:23.527334 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-2" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="cinder-scheduler" containerID="cri-o://ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" gracePeriod=30
Jan 21 06:53:23 crc kubenswrapper[4913]: I0121 06:53:23.527461 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-2" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="probe" containerID="cri-o://172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" gracePeriod=30
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.068231 4913 generic.go:334] "Generic (PLEG): container finished" podID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerID="172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" exitCode=0
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.068285 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerDied","Data":"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c"}
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.688504 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844235 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") "
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844308 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") "
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844344 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") "
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844402 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") "
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844424 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") "
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844754 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.851140 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts" (OuterVolumeSpecName: "scripts") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.853010 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd" (OuterVolumeSpecName: "kube-api-access-vb9jd") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "kube-api-access-vb9jd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.853807 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.930425 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data" (OuterVolumeSpecName: "config-data") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946262 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946313 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946337 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946356 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946379 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.076803 4913 generic.go:334] "Generic (PLEG): container finished" podID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerID="ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" exitCode=0
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.076848 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerDied","Data":"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499"}
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.076877 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerDied","Data":"a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20"}
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.076899 4913 scope.go:117] "RemoveContainer" containerID="172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c"
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.077051 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.110449 4913 scope.go:117] "RemoveContainer" containerID="ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499"
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.115722 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.124851 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.131110 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.131346 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-1" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="cinder-scheduler" containerID="cri-o://586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" gracePeriod=30
Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.131710 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-1" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="probe"
containerID="cri-o://dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" gracePeriod=30 Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.153981 4913 scope.go:117] "RemoveContainer" containerID="172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" Jan 21 06:53:25 crc kubenswrapper[4913]: E0121 06:53:25.154638 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c\": container with ID starting with 172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c not found: ID does not exist" containerID="172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.154679 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c"} err="failed to get container status \"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c\": rpc error: code = NotFound desc = could not find container \"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c\": container with ID starting with 172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c not found: ID does not exist" Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.154705 4913 scope.go:117] "RemoveContainer" containerID="ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" Jan 21 06:53:25 crc kubenswrapper[4913]: E0121 06:53:25.155201 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499\": container with ID starting with ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499 not found: ID does not exist" containerID="ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" Jan 21 06:53:25 
crc kubenswrapper[4913]: I0121 06:53:25.155235 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499"} err="failed to get container status \"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499\": rpc error: code = NotFound desc = could not find container \"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499\": container with ID starting with ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499 not found: ID does not exist" Jan 21 06:53:26 crc kubenswrapper[4913]: I0121 06:53:26.090460 4913 generic.go:334] "Generic (PLEG): container finished" podID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerID="dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" exitCode=0 Jan 21 06:53:26 crc kubenswrapper[4913]: I0121 06:53:26.090607 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerDied","Data":"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732"} Jan 21 06:53:26 crc kubenswrapper[4913]: I0121 06:53:26.539192 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" path="/var/lib/kubelet/pods/3556b2f0-f34e-47d9-b864-c0a7e8b6989c/volumes" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.731575 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831649 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831698 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831768 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831809 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831863 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831968 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.832164 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.837207 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.838157 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts" (OuterVolumeSpecName: "scripts") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.838860 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj" (OuterVolumeSpecName: "kube-api-access-w9dlj") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "kube-api-access-w9dlj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.918819 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data" (OuterVolumeSpecName: "config-data") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.933911 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.934098 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.934201 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.934280 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.126884 4913 generic.go:334] "Generic (PLEG): container finished" podID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerID="586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" exitCode=0 Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.126949 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" 
event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerDied","Data":"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d"} Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.127002 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerDied","Data":"b16ed940864ef946c18ba1dbbbda2af9f6fa0f1dd70aecdf23a72e769b81c37b"} Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.127031 4913 scope.go:117] "RemoveContainer" containerID="dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.127097 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.163755 4913 scope.go:117] "RemoveContainer" containerID="586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.185449 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.194800 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.207974 4913 scope.go:117] "RemoveContainer" containerID="dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.209489 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732\": container with ID starting with dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732 not found: ID does not exist" containerID="dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" Jan 21 06:53:30 crc 
kubenswrapper[4913]: I0121 06:53:30.209623 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732"} err="failed to get container status \"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732\": rpc error: code = NotFound desc = could not find container \"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732\": container with ID starting with dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732 not found: ID does not exist" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.209665 4913 scope.go:117] "RemoveContainer" containerID="586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.210420 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d\": container with ID starting with 586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d not found: ID does not exist" containerID="586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.210481 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d"} err="failed to get container status \"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d\": rpc error: code = NotFound desc = could not find container \"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d\": container with ID starting with 586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d not found: ID does not exist" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.544538 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" 
path="/var/lib/kubelet/pods/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c/volumes" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912351 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.912815 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912844 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.912897 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912911 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.912935 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912954 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.912974 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912987 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.913205 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="probe" Jan 21 06:53:30 crc 
kubenswrapper[4913]: I0121 06:53:30.913227 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.913244 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.913278 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.914481 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.934056 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.052786 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.052855 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.052899 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") pod \"cinder-backup-1\" (UID: 
\"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.052935 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053119 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053239 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053321 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053387 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " 
pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053444 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053529 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053553 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053629 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053745 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 
06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053781 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155128 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155248 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155329 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155353 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155380 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155249 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155444 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155449 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155495 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155562 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc 
kubenswrapper[4913]: I0121 06:53:31.155638 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155666 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155707 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155749 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155780 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155797 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155811 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155848 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155886 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155895 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155913 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155953 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.156041 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.156071 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.161843 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.163558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.165268 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.185841 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.243646 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.526467 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.526847 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"
Jan 21 06:53:31 crc kubenswrapper[4913]: E0121 06:53:31.527113 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.717394 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"]
Jan 21 06:53:31 crc kubenswrapper[4913]: W0121 06:53:31.725810 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33e852d1_210d_4846_b7bf_b0a2dba9b6d2.slice/crio-97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071 WatchSource:0}: Error finding container 97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071: Status 404 returned error can't find the container with id 97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071
Jan 21 06:53:32 crc kubenswrapper[4913]: I0121 06:53:32.154301 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerStarted","Data":"8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69"}
Jan 21 06:53:32 crc kubenswrapper[4913]: I0121 06:53:32.155755 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerStarted","Data":"880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa"}
Jan 21 06:53:32 crc kubenswrapper[4913]: I0121 06:53:32.155829 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerStarted","Data":"97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071"}
Jan 21 06:53:32 crc kubenswrapper[4913]: I0121 06:53:32.176827 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-1" podStartSLOduration=2.176809849 podStartE2EDuration="2.176809849s" podCreationTimestamp="2026-01-21 06:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:53:32.172982777 +0000 UTC m=+1101.969342450" watchObservedRunningTime="2026-01-21 06:53:32.176809849 +0000 UTC m=+1101.973169522"
Jan 21 06:53:36 crc kubenswrapper[4913]: I0121 06:53:36.244291 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:36 crc kubenswrapper[4913]: I0121 06:53:36.485298 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.243893 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"]
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.245461 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.261786 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"]
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360693 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360743 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360772 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360801 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360865 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360922 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360957 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360982 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361031 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361061 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361104 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361127 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361188 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361287 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462813 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462920 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462974 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463000 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462919 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463028 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462976 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463092 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463153 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463114 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463200 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463237 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463269 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463292 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463404 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463487 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463660 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463729 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463865 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463755 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463943 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.464032 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.464112 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.468532 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.469394 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.469528 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.484150 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.565638 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:38 crc kubenswrapper[4913]: I0121 06:53:38.029069 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"]
Jan 21 06:53:38 crc kubenswrapper[4913]: I0121 06:53:38.200709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerStarted","Data":"2792866368f04fea46de0be2740781f256cfc0dc091be293aaeeb16fba60fbb5"}
Jan 21 06:53:38 crc kubenswrapper[4913]: I0121 06:53:38.319408 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:53:38 crc kubenswrapper[4913]: I0121 06:53:38.320101 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:53:39 crc kubenswrapper[4913]: I0121 06:53:39.214716 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerStarted","Data":"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4"}
Jan 21 06:53:39 crc kubenswrapper[4913]: I0121 06:53:39.215284 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerStarted","Data":"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4"}
Jan 21 06:53:39 crc kubenswrapper[4913]: I0121 06:53:39.243345 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-2" podStartSLOduration=2.243321824 podStartE2EDuration="2.243321824s" podCreationTimestamp="2026-01-21 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:53:39.243204481 +0000 UTC m=+1109.039564164" watchObservedRunningTime="2026-01-21 06:53:39.243321824 +0000 UTC m=+1109.039681527"
Jan 21 06:53:42 crc kubenswrapper[4913]: I0121 06:53:42.566121 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:46 crc kubenswrapper[4913]: I0121 06:53:46.527161 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"
Jan 21 06:53:46 crc kubenswrapper[4913]: I0121 06:53:46.527477 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"
Jan 21 06:53:47 crc kubenswrapper[4913]: I0121 06:53:47.296277 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"}
Jan 21 06:53:47 crc kubenswrapper[4913]: I0121 06:53:47.297026 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"}
Jan 21 06:53:47 crc kubenswrapper[4913]: I0121 06:53:47.749495 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.308750 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" exitCode=1
Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.308809 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"}
Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.308884 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"
Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.309604 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"
Jan 21 06:53:48 crc kubenswrapper[4913]: E0121 06:53:48.309852 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.361040 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.419342 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"]
Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.419651 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-2" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="cinder-backup" containerID="cri-o://b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" gracePeriod=30
Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.419750 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-2" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="probe" containerID="cri-o://e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" gracePeriod=30
Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.320255 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" exitCode=1
Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.320684 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"}
Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.320740 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"
Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.321829 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"
Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.321908 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"
Jan 21 06:53:49 crc kubenswrapper[4913]: E0121 06:53:49.322455 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.330054 4913 generic.go:334] "Generic (PLEG): container finished" podID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerID="e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" exitCode=0
Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.330460 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerDied","Data":"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4"}
Jan 21 06:53:50 crc kubenswrapper[4913]: I0121 06:53:50.342780 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"
Jan 21 06:53:50 crc kubenswrapper[4913]: I0121 06:53:50.342820 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"
Jan 21 06:53:50 crc kubenswrapper[4913]: E0121 06:53:50.343139 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:51 crc kubenswrapper[4913]: I0121 06:53:51.361655 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:53:51 crc kubenswrapper[4913]: I0121 06:53:51.362661 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"
Jan 21 06:53:51 crc kubenswrapper[4913]: I0121 06:53:51.362685 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"
Jan 21 06:53:51 crc kubenswrapper[4913]: E0121 06:53:51.363063 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:53 crc kubenswrapper[4913]: I0121 06:53:53.361116 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:53:53 crc kubenswrapper[4913]: I0121 06:53:53.362333 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"
Jan 21 06:53:53 crc kubenswrapper[4913]: I0121 06:53:53.362351 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"
Jan 21 06:53:53 crc kubenswrapper[4913]: E0121 06:53:53.362668 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.072407 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2"
Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251199 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") "
Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251294 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run" (OuterVolumeSpecName: "run") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251309 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") "
Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251346 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") "
Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251376 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") "
Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251402 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "var-locks-brick".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251416 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251432 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251463 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251518 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251555 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251616 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251649 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251559 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251579 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251553 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251649 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys" (OuterVolumeSpecName: "sys") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251776 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251807 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251831 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251861 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251924 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251979 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252029 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev" (OuterVolumeSpecName: "dev") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252343 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252357 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252366 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252376 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252388 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252396 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252404 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252413 4913 reconciler_common.go:293] "Volume detached for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252421 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252429 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.257004 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.257808 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd" (OuterVolumeSpecName: "kube-api-access-cjnxd") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "kube-api-access-cjnxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.263617 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts" (OuterVolumeSpecName: "scripts") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.352085 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data" (OuterVolumeSpecName: "config-data") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.355233 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.355334 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.355373 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.355447 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377103 4913 generic.go:334] "Generic (PLEG): container finished" podID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerID="b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" exitCode=0 Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377158 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" 
event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerDied","Data":"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4"} Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377236 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerDied","Data":"2792866368f04fea46de0be2740781f256cfc0dc091be293aaeeb16fba60fbb5"} Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377178 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377271 4913 scope.go:117] "RemoveContainer" containerID="e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.405556 4913 scope.go:117] "RemoveContainer" containerID="b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.418404 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.428185 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.434582 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.434894 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-1" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="cinder-backup" containerID="cri-o://880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa" gracePeriod=30 Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.435430 4913 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="cinder-kuttl-tests/cinder-backup-1" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="probe" containerID="cri-o://8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69" gracePeriod=30 Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.446580 4913 scope.go:117] "RemoveContainer" containerID="e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" Jan 21 06:53:54 crc kubenswrapper[4913]: E0121 06:53:54.447190 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4\": container with ID starting with e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4 not found: ID does not exist" containerID="e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.447219 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4"} err="failed to get container status \"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4\": rpc error: code = NotFound desc = could not find container \"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4\": container with ID starting with e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4 not found: ID does not exist" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.447258 4913 scope.go:117] "RemoveContainer" containerID="b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" Jan 21 06:53:54 crc kubenswrapper[4913]: E0121 06:53:54.447625 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4\": container with ID starting with b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4 not found: ID 
does not exist" containerID="b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.447641 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4"} err="failed to get container status \"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4\": rpc error: code = NotFound desc = could not find container \"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4\": container with ID starting with b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4 not found: ID does not exist" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.535573 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" path="/var/lib/kubelet/pods/61fafe77-c820-4eff-8892-f1e725d2ec2d/volumes" Jan 21 06:53:56 crc kubenswrapper[4913]: I0121 06:53:56.929225 4913 generic.go:334] "Generic (PLEG): container finished" podID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerID="8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69" exitCode=0 Jan 21 06:53:56 crc kubenswrapper[4913]: I0121 06:53:56.929280 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerDied","Data":"8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69"} Jan 21 06:53:58 crc kubenswrapper[4913]: I0121 06:53:58.948866 4913 generic.go:334] "Generic (PLEG): container finished" podID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerID="880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa" exitCode=0 Jan 21 06:53:58 crc kubenswrapper[4913]: I0121 06:53:58.949007 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" 
event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerDied","Data":"880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa"} Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.710517 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876450 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876509 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876544 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876610 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876649 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") pod 
\"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876678 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876714 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876767 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876786 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876803 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876820 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876844 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876881 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876923 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883742 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys" (OuterVolumeSpecName: "sys") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883803 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883850 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883881 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883922 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883937 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883944 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883984 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.884004 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev" (OuterVolumeSpecName: "dev") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883996 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run" (OuterVolumeSpecName: "run") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.887389 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk" (OuterVolumeSpecName: "kube-api-access-zw4dk") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "kube-api-access-zw4dk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.887854 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.888621 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts" (OuterVolumeSpecName: "scripts") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.965323 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerDied","Data":"97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071"}
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.965466 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1"
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.966691 4913 scope.go:117] "RemoveContainer" containerID="8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69"
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979688 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979746 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979774 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979798 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979822 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979847 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979874 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979902 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979957 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979982 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.980005 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.980029 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.980085 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") on node \"crc\" DevicePath \"\""
Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.989807 4913 scope.go:117] "RemoveContainer" containerID="880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa"
Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.002292 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data" (OuterVolumeSpecName: "config-data") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.082684 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.300743 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"]
Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.304933 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"]
Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.538319 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" path="/var/lib/kubelet/pods/33e852d1-210d-4846-b7bf-b0a2dba9b6d2/volumes"
Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.977179 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.977496 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api-log" containerID="cri-o://8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c" gracePeriod=30
Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.977611 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api" containerID="cri-o://a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828" gracePeriod=30
Jan 21 06:54:01 crc kubenswrapper[4913]: I0121 06:54:01.991361 4913 generic.go:334] "Generic (PLEG): container finished" podID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerID="8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c" exitCode=143
Jan 21 06:54:01 crc kubenswrapper[4913]: I0121 06:54:01.991814 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerDied","Data":"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"}
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.133430 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.90:8776/healthcheck\": read tcp 10.217.0.2:46466->10.217.0.90:8776: read: connection reset by peer"
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.560164 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655559 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") "
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655700 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") "
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655752 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") "
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655841 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") "
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655883 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") "
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655922 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") "
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655977 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.656319 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.656488 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs" (OuterVolumeSpecName: "logs") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.676104 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.676219 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts" (OuterVolumeSpecName: "scripts") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.677327 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv" (OuterVolumeSpecName: "kube-api-access-g2nvv") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "kube-api-access-g2nvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.691454 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data" (OuterVolumeSpecName: "config-data") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758573 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758685 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758706 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758723 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758741 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.021831 4913 generic.go:334] "Generic (PLEG): container finished" podID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerID="a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828" exitCode=0
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.021916 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerDied","Data":"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"}
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.021980 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.022415 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerDied","Data":"e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1"}
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.022793 4913 scope.go:117] "RemoveContainer" containerID="a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.055276 4913 scope.go:117] "RemoveContainer" containerID="8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.065404 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.077233 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.090332 4913 scope.go:117] "RemoveContainer" containerID="a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"
Jan 21 06:54:05 crc kubenswrapper[4913]: E0121 06:54:05.090749 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828\": container with ID starting with a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828 not found: ID does not exist" containerID="a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.090845 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"} err="failed to get container status \"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828\": rpc error: code = NotFound desc = could not find container \"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828\": container with ID starting with a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828 not found: ID does not exist"
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.090880 4913 scope.go:117] "RemoveContainer" containerID="8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"
Jan 21 06:54:05 crc kubenswrapper[4913]: E0121 06:54:05.091294 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c\": container with ID starting with 8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c not found: ID does not exist" containerID="8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"
Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.091320 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"} err="failed to get container status \"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c\": rpc error: code = NotFound desc = could not find container \"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c\": container with ID starting with 8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c not found: ID does not exist"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.450947 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451676 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="probe"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451698 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="probe"
Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451718 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451728 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api"
Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451752 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="cinder-backup"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451763 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="cinder-backup"
Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451784 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="probe"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451793 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="probe"
Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451807 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api-log"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451816 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api-log"
Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451854 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="cinder-backup"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451866 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="cinder-backup"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452113 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="cinder-backup"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452132 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="probe"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452150 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452172 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="cinder-backup"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452187 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="probe"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452203 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api-log"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.453277 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.463342 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-1"]
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.463976 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.464890 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.471312 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-2"]
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.473368 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.484680 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.492417 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"]
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.499938 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"]
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.527172 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.527207 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"
Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.527495 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.540054 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" path="/var/lib/kubelet/pods/ff633750-3c62-48b5-b977-f1b4f42b9b7e/volumes"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598121 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598158 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598175 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598200 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598320 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598462 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598499 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598528 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598554 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598571 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598659 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598716 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598765 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598829 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598880 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598928 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598989 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.599029 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lx2k\" (UniqueName: \"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700300 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700338 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700364 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700389 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700416 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700459 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700477 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700496 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700511 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700525 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700550 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700563 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700580 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700624 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700644 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700663 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700690 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700705 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lx2k\" (UniqueName: \"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700845 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1"
Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.701218 4913 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.702236 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.702483 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.702580 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.702666 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.704513 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 
21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.705199 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.705423 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.705708 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.706835 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.707276 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.709249 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.709343 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.719329 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lx2k\" (UniqueName: \"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.719909 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.729693 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.729843 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " 
pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.789913 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.807985 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.824836 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:07 crc kubenswrapper[4913]: I0121 06:54:07.017532 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:07 crc kubenswrapper[4913]: I0121 06:54:07.058644 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerStarted","Data":"fc61141adedd9fabf2633556dfc1607679b4a48fb46bdb5f82cce9d946540273"} Jan 21 06:54:07 crc kubenswrapper[4913]: I0121 06:54:07.072496 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:07 crc kubenswrapper[4913]: W0121 06:54:07.076438 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bfe045_07fb_48c6_aa71_356c7934f35a.slice/crio-48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe WatchSource:0}: Error finding container 48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe: Status 404 returned error can't find the container with id 48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe Jan 21 06:54:07 crc kubenswrapper[4913]: I0121 06:54:07.112062 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:07 crc kubenswrapper[4913]: W0121 06:54:07.118496 4913 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode39e587a_7f6c_49d5_a5a3_1fb01ee2e790.slice/crio-9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0 WatchSource:0}: Error finding container 9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0: Status 404 returned error can't find the container with id 9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0 Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.070968 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerStarted","Data":"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a"} Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.071444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerStarted","Data":"9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0"} Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.079577 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerStarted","Data":"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d"} Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.079637 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerStarted","Data":"48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe"} Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.082094 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerStarted","Data":"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea"} Jan 21 06:54:08 crc 
kubenswrapper[4913]: I0121 06:54:08.322993 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.323043 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.323078 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.323639 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.323684 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6" gracePeriod=600 Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.095027 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" 
containerID="bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6" exitCode=0 Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.095101 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.095747 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.095782 4913 scope.go:117] "RemoveContainer" containerID="e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.103102 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerStarted","Data":"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.103237 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.105863 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerStarted","Data":"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.106036 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.108290 4913 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerStarted","Data":"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.108544 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.146454 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-1" podStartSLOduration=3.146430159 podStartE2EDuration="3.146430159s" podCreationTimestamp="2026-01-21 06:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:09.143038437 +0000 UTC m=+1138.939398110" watchObservedRunningTime="2026-01-21 06:54:09.146430159 +0000 UTC m=+1138.942789842" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.170433 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-2" podStartSLOduration=3.170408565 podStartE2EDuration="3.170408565s" podCreationTimestamp="2026-01-21 06:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:09.164968918 +0000 UTC m=+1138.961328671" watchObservedRunningTime="2026-01-21 06:54:09.170408565 +0000 UTC m=+1138.966768258" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.189148 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.189120258 podStartE2EDuration="3.189120258s" podCreationTimestamp="2026-01-21 06:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:09.180105146 +0000 UTC m=+1138.976464859" 
watchObservedRunningTime="2026-01-21 06:54:09.189120258 +0000 UTC m=+1138.985479951" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.527004 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.527782 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:54:18 crc kubenswrapper[4913]: E0121 06:54:18.528287 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.649973 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.806747 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.858679 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.840083 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.840737 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-2" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" 
containerName="cinder-api-log" containerID="cri-o://e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" gracePeriod=30 Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.840759 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-2" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" containerID="cri-o://71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" gracePeriod=30 Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.847282 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-2" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.98:8776/healthcheck\": EOF" Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.850980 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.851277 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-1" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api-log" containerID="cri-o://a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" gracePeriod=30 Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.851662 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-1" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" containerID="cri-o://88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" gracePeriod=30 Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.863724 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-1" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.97:8776/healthcheck\": EOF" Jan 21 06:54:20 crc 
kubenswrapper[4913]: I0121 06:54:20.218089 4913 generic.go:334] "Generic (PLEG): container finished" podID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerID="e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" exitCode=143 Jan 21 06:54:20 crc kubenswrapper[4913]: I0121 06:54:20.218159 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerDied","Data":"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a"} Jan 21 06:54:20 crc kubenswrapper[4913]: I0121 06:54:20.219627 4913 generic.go:334] "Generic (PLEG): container finished" podID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerID="a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" exitCode=143 Jan 21 06:54:20 crc kubenswrapper[4913]: I0121 06:54:20.219651 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerDied","Data":"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d"} Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.343193 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-2" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.98:8776/healthcheck\": read tcp 10.217.0.2:36410->10.217.0.98:8776: read: connection reset by peer" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.351744 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-1" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.97:8776/healthcheck\": read tcp 10.217.0.2:37910->10.217.0.97:8776: read: connection reset by peer" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.630137 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.702525 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792440 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792498 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792529 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792570 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792651 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 
06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792669 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792727 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792746 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792766 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792786 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792808 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792826 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.794569 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798153 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9" (OuterVolumeSpecName: "kube-api-access-4mrl9") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "kube-api-access-4mrl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798310 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798569 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs" (OuterVolumeSpecName: "logs") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798622 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts" (OuterVolumeSpecName: "scripts") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798723 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.799042 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs" (OuterVolumeSpecName: "logs") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.804293 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh" (OuterVolumeSpecName: "kube-api-access-47hrh") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "kube-api-access-47hrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.804315 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.805741 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts" (OuterVolumeSpecName: "scripts") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.832448 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data" (OuterVolumeSpecName: "config-data") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.842073 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data" (OuterVolumeSpecName: "config-data") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894233 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894271 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894283 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894293 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894304 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894316 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894326 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894339 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894348 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894356 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894363 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894370 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.266936 4913 generic.go:334] "Generic (PLEG): container finished" podID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerID="88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" exitCode=0 Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.267047 4913 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerDied","Data":"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff"} Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.267099 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerDied","Data":"48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe"} Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.267126 4913 scope.go:117] "RemoveContainer" containerID="88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.267314 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.272084 4913 generic.go:334] "Generic (PLEG): container finished" podID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerID="71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" exitCode=0 Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.272165 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerDied","Data":"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3"} Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.272210 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerDied","Data":"9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0"} Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.272311 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.289278 4913 scope.go:117] "RemoveContainer" containerID="a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.312630 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.325168 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.336295 4913 scope.go:117] "RemoveContainer" containerID="88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" Jan 21 06:54:25 crc kubenswrapper[4913]: E0121 06:54:25.336815 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff\": container with ID starting with 88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff not found: ID does not exist" containerID="88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.336872 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff"} err="failed to get container status \"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff\": rpc error: code = NotFound desc = could not find container \"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff\": container with ID starting with 88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff not found: ID does not exist" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.336902 4913 scope.go:117] "RemoveContainer" containerID="a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" Jan 21 06:54:25 crc 
kubenswrapper[4913]: E0121 06:54:25.337254 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d\": container with ID starting with a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d not found: ID does not exist" containerID="a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.337283 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d"} err="failed to get container status \"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d\": rpc error: code = NotFound desc = could not find container \"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d\": container with ID starting with a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d not found: ID does not exist" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.337298 4913 scope.go:117] "RemoveContainer" containerID="71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.347686 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.352996 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.356744 4913 scope.go:117] "RemoveContainer" containerID="e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.406028 4913 scope.go:117] "RemoveContainer" containerID="71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" Jan 21 06:54:25 crc kubenswrapper[4913]: E0121 06:54:25.406644 4913 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3\": container with ID starting with 71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3 not found: ID does not exist" containerID="71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.406696 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3"} err="failed to get container status \"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3\": rpc error: code = NotFound desc = could not find container \"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3\": container with ID starting with 71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3 not found: ID does not exist" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.406730 4913 scope.go:117] "RemoveContainer" containerID="e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" Jan 21 06:54:25 crc kubenswrapper[4913]: E0121 06:54:25.407091 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a\": container with ID starting with e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a not found: ID does not exist" containerID="e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.407120 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a"} err="failed to get container status \"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a\": rpc error: code = NotFound desc = could not find container 
\"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a\": container with ID starting with e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a not found: ID does not exist" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.240950 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.246644 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.279504 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.281205 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="cinder-scheduler" containerID="cri-o://54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.281280 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="probe" containerID="cri-o://7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.319151 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.347986 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.348363 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="probe" 
containerID="cri-o://001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.348307 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="cinder-backup" containerID="cri-o://503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.399677 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.399963 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api-log" containerID="cri-o://7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.400366 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" containerID="cri-o://7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440145 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:26 crc kubenswrapper[4913]: E0121 06:54:26.440449 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440463 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: E0121 06:54:26.440471 4913 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440477 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: E0121 06:54:26.440492 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440498 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: E0121 06:54:26.440514 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440520 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440650 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440660 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440675 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440687 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.441184 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.459554 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.524372 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.524450 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.565555 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" path="/var/lib/kubelet/pods/431d7c8b-5c95-4534-8cba-dd55885fc5cb/volumes" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.566273 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" path="/var/lib/kubelet/pods/a8bfe045-07fb-48c6-aa71-356c7934f35a/volumes" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.566854 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" path="/var/lib/kubelet/pods/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790/volumes" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.627436 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.627565 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.628902 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.649490 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.768052 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.829186 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.933881 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.935441 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run" (OuterVolumeSpecName: "run") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937757 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937805 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937867 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937912 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937957 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938003 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938033 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938059 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938107 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938163 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938200 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938222 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938249 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 
06:54:26.938306 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939089 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939144 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939391 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939389 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939653 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940333 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys" (OuterVolumeSpecName: "sys") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940381 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev" (OuterVolumeSpecName: "dev") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940410 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940654 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940675 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940689 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940701 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940712 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940722 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940732 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940743 4913 reconciler_common.go:293] "Volume detached for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940754 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940766 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.945237 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.945270 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2" (OuterVolumeSpecName: "kube-api-access-92vr2") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "kube-api-access-92vr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.946975 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts" (OuterVolumeSpecName: "scripts") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.014371 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data" (OuterVolumeSpecName: "config-data") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.043283 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.043341 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.043355 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.043375 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.052781 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:27 crc kubenswrapper[4913]: W0121 06:54:27.056885 4913 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19a6b93f_5562_4cac_8cc0_5aefdf18537d.slice/crio-0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994 WatchSource:0}: Error finding container 0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994: Status 404 returned error can't find the container with id 0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994 Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.290046 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" event={"ID":"19a6b93f-5562-4cac-8cc0-5aefdf18537d","Type":"ContainerStarted","Data":"e1ea5f6140029683c8a222d0bbd83ba86ea14378a3c9af4d524c0c94971d8e7a"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.290532 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" event={"ID":"19a6b93f-5562-4cac-8cc0-5aefdf18537d","Type":"ContainerStarted","Data":"0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.292302 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.292336 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.292423 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.294085 4913 generic.go:334] "Generic (PLEG): container finished" podID="b256a85e-47c9-4195-9732-d58250fd3f42" containerID="001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" exitCode=0 Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.294137 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerDied","Data":"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.296361 4913 generic.go:334] "Generic (PLEG): container finished" podID="7667e048-a702-4a50-8e72-35d001e6a310" containerID="7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" exitCode=0 Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.296415 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerDied","Data":"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.298438 4913 generic.go:334] "Generic (PLEG): container finished" podID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerID="7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" exitCode=143 Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.298472 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerDied","Data":"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.312389 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" podStartSLOduration=1.3123552809999999 
podStartE2EDuration="1.312355281s" podCreationTimestamp="2026-01-21 06:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:27.310434548 +0000 UTC m=+1157.106794221" watchObservedRunningTime="2026-01-21 06:54:27.312355281 +0000 UTC m=+1157.108714954" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.336153 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.336480 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.352700 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:28 crc kubenswrapper[4913]: I0121 06:54:28.307425 4913 generic.go:334] "Generic (PLEG): container finished" podID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" containerID="e1ea5f6140029683c8a222d0bbd83ba86ea14378a3c9af4d524c0c94971d8e7a" exitCode=0 Jan 21 06:54:28 crc kubenswrapper[4913]: I0121 06:54:28.307507 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" event={"ID":"19a6b93f-5562-4cac-8cc0-5aefdf18537d","Type":"ContainerDied","Data":"e1ea5f6140029683c8a222d0bbd83ba86ea14378a3c9af4d524c0c94971d8e7a"} Jan 21 06:54:28 crc kubenswrapper[4913]: I0121 06:54:28.537186 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" path="/var/lib/kubelet/pods/5c8afc51-5054-46d8-a16d-e07541ff4af7/volumes" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.742532 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.845669 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.96:8776/healthcheck\": read tcp 10.217.0.2:47266->10.217.0.96:8776: read: connection reset by peer" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.893213 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") pod \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.893326 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") pod \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.894167 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19a6b93f-5562-4cac-8cc0-5aefdf18537d" (UID: "19a6b93f-5562-4cac-8cc0-5aefdf18537d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.899875 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82" (OuterVolumeSpecName: "kube-api-access-zwt82") pod "19a6b93f-5562-4cac-8cc0-5aefdf18537d" (UID: "19a6b93f-5562-4cac-8cc0-5aefdf18537d"). InnerVolumeSpecName "kube-api-access-zwt82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.994933 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.994975 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.014088 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096121 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096228 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096314 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096412 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096513 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096522 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096934 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.100787 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts" (OuterVolumeSpecName: "scripts") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.101730 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.101849 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd" (OuterVolumeSpecName: "kube-api-access-4sjpd") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "kube-api-access-4sjpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.157628 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data" (OuterVolumeSpecName: "config-data") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.180433 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.200117 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.200143 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.200151 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.200160 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301448 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") pod 
\"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301568 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301641 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301673 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301755 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lx2k\" (UniqueName: \"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301785 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301786 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.302154 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs" (OuterVolumeSpecName: "logs") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.302384 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.302416 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.304824 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts" (OuterVolumeSpecName: "scripts") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.305664 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k" (OuterVolumeSpecName: "kube-api-access-2lx2k") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "kube-api-access-2lx2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.306354 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326553 4913 generic.go:334] "Generic (PLEG): container finished" podID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerID="7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" exitCode=0 Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326691 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerDied","Data":"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326706 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326726 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerDied","Data":"fc61141adedd9fabf2633556dfc1607679b4a48fb46bdb5f82cce9d946540273"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326749 4913 scope.go:117] "RemoveContainer" containerID="7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.331014 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.331011 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" event={"ID":"19a6b93f-5562-4cac-8cc0-5aefdf18537d","Type":"ContainerDied","Data":"0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.331069 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.334192 4913 generic.go:334] "Generic (PLEG): container finished" podID="7667e048-a702-4a50-8e72-35d001e6a310" containerID="54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" exitCode=0 Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.334250 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerDied","Data":"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.334287 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerDied","Data":"f981181d61a91c8c4ac8291fd1a2e334c01d5ff76df7a42189afca1df38c5352"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.334359 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.348288 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data" (OuterVolumeSpecName: "config-data") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.372222 4913 scope.go:117] "RemoveContainer" containerID="7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.389032 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.395068 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.403350 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.403373 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.403382 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lx2k\" (UniqueName: 
\"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.403391 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.423070 4913 scope.go:117] "RemoveContainer" containerID="7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" Jan 21 06:54:30 crc kubenswrapper[4913]: E0121 06:54:30.423682 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3\": container with ID starting with 7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3 not found: ID does not exist" containerID="7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.423719 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3"} err="failed to get container status \"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3\": rpc error: code = NotFound desc = could not find container \"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3\": container with ID starting with 7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3 not found: ID does not exist" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.423745 4913 scope.go:117] "RemoveContainer" containerID="7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" Jan 21 06:54:30 crc kubenswrapper[4913]: E0121 06:54:30.424148 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea\": container with ID starting with 7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea not found: ID does not exist" containerID="7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.424228 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea"} err="failed to get container status \"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea\": rpc error: code = NotFound desc = could not find container \"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea\": container with ID starting with 7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea not found: ID does not exist" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.424297 4913 scope.go:117] "RemoveContainer" containerID="7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.452457 4913 scope.go:117] "RemoveContainer" containerID="54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.477721 4913 scope.go:117] "RemoveContainer" containerID="7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" Jan 21 06:54:30 crc kubenswrapper[4913]: E0121 06:54:30.478854 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729\": container with ID starting with 7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729 not found: ID does not exist" containerID="7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.478915 4913 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729"} err="failed to get container status \"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729\": rpc error: code = NotFound desc = could not find container \"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729\": container with ID starting with 7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729 not found: ID does not exist" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.478947 4913 scope.go:117] "RemoveContainer" containerID="54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" Jan 21 06:54:30 crc kubenswrapper[4913]: E0121 06:54:30.479585 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482\": container with ID starting with 54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482 not found: ID does not exist" containerID="54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.479633 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482"} err="failed to get container status \"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482\": rpc error: code = NotFound desc = could not find container \"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482\": container with ID starting with 54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482 not found: ID does not exist" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.538011 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7667e048-a702-4a50-8e72-35d001e6a310" path="/var/lib/kubelet/pods/7667e048-a702-4a50-8e72-35d001e6a310/volumes" Jan 21 06:54:30 crc kubenswrapper[4913]: 
I0121 06:54:30.650181 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.655866 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.675647 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808073 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808116 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808149 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808177 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808201 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808231 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808243 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808259 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808272 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808285 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808308 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808347 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808394 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808481 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808915 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808960 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808987 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809033 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run" (OuterVolumeSpecName: "run") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809046 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809071 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809091 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev" (OuterVolumeSpecName: "dev") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809111 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809199 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.810449 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys" (OuterVolumeSpecName: "sys") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.813170 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts" (OuterVolumeSpecName: "scripts") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.813170 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx" (OuterVolumeSpecName: "kube-api-access-n78xx") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "kube-api-access-n78xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.813878 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.896152 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data" (OuterVolumeSpecName: "config-data") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909422 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909449 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909461 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909472 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909484 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909494 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909502 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909511 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909520 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909530 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909539 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909549 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909558 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909568 
4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.345750 4913 generic.go:334] "Generic (PLEG): container finished" podID="b256a85e-47c9-4195-9732-d58250fd3f42" containerID="503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" exitCode=0 Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.345842 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerDied","Data":"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282"} Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.345882 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerDied","Data":"8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf"} Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.345912 4913 scope.go:117] "RemoveContainer" containerID="001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.346098 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.380179 4913 scope.go:117] "RemoveContainer" containerID="503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.392318 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.399977 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.421739 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.427071 4913 scope.go:117] "RemoveContainer" containerID="001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.427583 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963\": container with ID starting with 001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963 not found: ID does not exist" containerID="001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.427701 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963"} err="failed to get container status \"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963\": rpc error: code = NotFound desc = could not find container \"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963\": container with ID starting with 001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963 not found: ID does not exist" Jan 21 06:54:31 crc 
kubenswrapper[4913]: I0121 06:54:31.427742 4913 scope.go:117] "RemoveContainer" containerID="503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.428123 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282\": container with ID starting with 503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282 not found: ID does not exist" containerID="503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.428161 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282"} err="failed to get container status \"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282\": rpc error: code = NotFound desc = could not find container \"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282\": container with ID starting with 503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282 not found: ID does not exist" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.428309 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.441583 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.449845 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.455935 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.462684 4913 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.520707 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.520991 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521005 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521015 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521021 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521032 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521039 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521047 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521053 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521063 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" 
containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521068 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521076 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521083 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521091 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521097 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521105 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="cinder-backup" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521111 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="cinder-backup" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521124 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="cinder-scheduler" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521131 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="cinder-scheduler" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521139 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 
06:54:31.521147 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521156 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" containerName="mariadb-account-delete" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521163 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" containerName="mariadb-account-delete" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521173 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api-log" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521178 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api-log" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521252 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521259 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521365 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521373 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521382 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521388 4913 
memory_manager.go:354] "RemoveStaleState removing state" podUID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" containerName="mariadb-account-delete" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521396 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521404 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521411 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521419 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api-log" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521425 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="cinder-scheduler" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521435 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="cinder-backup" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521443 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521452 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521460 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521909 4913 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.528168 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.620783 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.620890 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.622665 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.623005 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.623068 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.623263 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.623333 4913 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.623861 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.625509 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-db-secret" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.636690 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.722080 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.722209 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.722309 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.722404 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.723772 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.743889 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.823844 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.823948 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 
06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.824686 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.834056 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.848322 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.936887 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.256408 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.373262 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" event={"ID":"ed1e527b-217a-46b6-a907-0a6b589f7c4c","Type":"ContainerStarted","Data":"15597597f1dee6d759b1be14f9ed0008d47b913d4ac8cedc150dd6f64a2d5caa"} Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.394681 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:54:32 crc kubenswrapper[4913]: W0121 06:54:32.408578 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ef9f85f_61e3_44b8_8974_9ec0a1e4e359.slice/crio-d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7 WatchSource:0}: Error finding container d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7: Status 404 returned error can't find the container with id d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7 Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.536745 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" path="/var/lib/kubelet/pods/19a6b93f-5562-4cac-8cc0-5aefdf18537d/volumes" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.537480 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e5eed1-ff67-483b-808d-466413987e09" path="/var/lib/kubelet/pods/84e5eed1-ff67-483b-808d-466413987e09/volumes" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.538002 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" 
path="/var/lib/kubelet/pods/b256a85e-47c9-4195-9732-d58250fd3f42/volumes" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.539137 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" path="/var/lib/kubelet/pods/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12/volumes" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.539997 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" path="/var/lib/kubelet/pods/de97e815-d3a3-4a3d-81e2-6054f65b82f0/volumes" Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.399965 4913 generic.go:334] "Generic (PLEG): container finished" podID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" containerID="f6cac025b126b4c0c411e321cbd1813b091d8ec15b54ab9a538e93d26406b363" exitCode=0 Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.401949 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" event={"ID":"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359","Type":"ContainerDied","Data":"f6cac025b126b4c0c411e321cbd1813b091d8ec15b54ab9a538e93d26406b363"} Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.402022 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" event={"ID":"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359","Type":"ContainerStarted","Data":"d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7"} Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.405011 4913 generic.go:334] "Generic (PLEG): container finished" podID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" containerID="9eca8ae0460adba99832950743728508ef374a3fcd006d3227af45af81c4c272" exitCode=0 Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.405047 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" 
event={"ID":"ed1e527b-217a-46b6-a907-0a6b589f7c4c","Type":"ContainerDied","Data":"9eca8ae0460adba99832950743728508ef374a3fcd006d3227af45af81c4c272"} Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.832993 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.837158 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968022 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") pod \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968151 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") pod \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968172 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") pod \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968204 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") pod \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\" (UID: 
\"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968916 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed1e527b-217a-46b6-a907-0a6b589f7c4c" (UID: "ed1e527b-217a-46b6-a907-0a6b589f7c4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968942 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" (UID: "0ef9f85f-61e3-44b8-8974-9ec0a1e4e359"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.973213 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42" (OuterVolumeSpecName: "kube-api-access-jpt42") pod "0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" (UID: "0ef9f85f-61e3-44b8-8974-9ec0a1e4e359"). InnerVolumeSpecName "kube-api-access-jpt42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.973269 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5" (OuterVolumeSpecName: "kube-api-access-hqvz5") pod "ed1e527b-217a-46b6-a907-0a6b589f7c4c" (UID: "ed1e527b-217a-46b6-a907-0a6b589f7c4c"). InnerVolumeSpecName "kube-api-access-hqvz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.069564 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.069884 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.069977 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.070069 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.422113 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" event={"ID":"ed1e527b-217a-46b6-a907-0a6b589f7c4c","Type":"ContainerDied","Data":"15597597f1dee6d759b1be14f9ed0008d47b913d4ac8cedc150dd6f64a2d5caa"} Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.422170 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15597597f1dee6d759b1be14f9ed0008d47b913d4ac8cedc150dd6f64a2d5caa" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.422242 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.426254 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" event={"ID":"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359","Type":"ContainerDied","Data":"d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7"} Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.426292 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.426572 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.861198 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:36 crc kubenswrapper[4913]: E0121 06:54:36.862155 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" containerName="mariadb-database-create" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862195 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" containerName="mariadb-database-create" Jan 21 06:54:36 crc kubenswrapper[4913]: E0121 06:54:36.862227 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" containerName="mariadb-account-create-update" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862244 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" containerName="mariadb-account-create-update" Jan 21 06:54:36 crc kubenswrapper[4913]: E0121 06:54:36.862264 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862281 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862526 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" containerName="mariadb-account-create-update" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862560 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" containerName="mariadb-database-create" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.863722 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.866236 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-bdjrj" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.867055 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.867461 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"combined-ca-bundle" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.868235 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.882959 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.999362 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngsg\" (UniqueName: 
\"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.999487 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.999516 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.999698 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:36.999933 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:36.999997 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.100924 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.100978 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101045 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101074 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101130 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngsg\" (UniqueName: \"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") 
pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101260 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101296 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.108229 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.110686 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.112636 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 
06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.112785 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.143486 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngsg\" (UniqueName: \"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.191259 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.696308 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:37 crc kubenswrapper[4913]: W0121 06:54:37.718745 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a5f0576_b10d_48d0_9017_4e24b85a1968.slice/crio-72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d WatchSource:0}: Error finding container 72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d: Status 404 returned error can't find the container with id 72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d Jan 21 06:54:38 crc kubenswrapper[4913]: I0121 06:54:38.456555 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" event={"ID":"2a5f0576-b10d-48d0-9017-4e24b85a1968","Type":"ContainerStarted","Data":"2fbf80cb1c97a1b30187af9913da5a773063cae1392a250c64b697455ea04db1"} Jan 21 06:54:38 crc kubenswrapper[4913]: I0121 
06:54:38.457013 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" event={"ID":"2a5f0576-b10d-48d0-9017-4e24b85a1968","Type":"ContainerStarted","Data":"72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d"} Jan 21 06:54:40 crc kubenswrapper[4913]: I0121 06:54:40.476226 4913 generic.go:334] "Generic (PLEG): container finished" podID="2a5f0576-b10d-48d0-9017-4e24b85a1968" containerID="2fbf80cb1c97a1b30187af9913da5a773063cae1392a250c64b697455ea04db1" exitCode=0 Jan 21 06:54:40 crc kubenswrapper[4913]: I0121 06:54:40.476317 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" event={"ID":"2a5f0576-b10d-48d0-9017-4e24b85a1968","Type":"ContainerDied","Data":"2fbf80cb1c97a1b30187af9913da5a773063cae1392a250c64b697455ea04db1"} Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.790968 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879238 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879355 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879393 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id" (OuterVolumeSpecName: 
"etc-machine-id") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879470 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") "
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879570 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") "
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879777 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nngsg\" (UniqueName: \"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") "
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879851 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") "
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.883212 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.887792 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts" (OuterVolumeSpecName: "scripts") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.887922 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.887820 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg" (OuterVolumeSpecName: "kube-api-access-nngsg") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "kube-api-access-nngsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.917820 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.926443 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data" (OuterVolumeSpecName: "config-data") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984431 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984470 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984485 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nngsg\" (UniqueName: \"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984499 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984510 4913 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.502066 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" event={"ID":"2a5f0576-b10d-48d0-9017-4e24b85a1968","Type":"ContainerDied","Data":"72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d"}
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.502124 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-dslh8"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.502129 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.793245 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"]
Jan 21 06:54:42 crc kubenswrapper[4913]: E0121 06:54:42.793468 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f0576-b10d-48d0-9017-4e24b85a1968" containerName="cinder-db-sync"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.793480 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f0576-b10d-48d0-9017-4e24b85a1968" containerName="cinder-db-sync"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.793624 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5f0576-b10d-48d0-9017-4e24b85a1968" containerName="cinder-db-sync"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.794216 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.796977 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.797205 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scheduler-config-data"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.797329 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-bdjrj"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.797417 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.797515 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"combined-ca-bundle"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.810679 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"]
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.811694 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.814061 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-backup-config-data"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.820831 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"]
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.873805 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.875061 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.879888 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-volume-volume1-config-data"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.894970 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.898871 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.899851 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.899980 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900091 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900254 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900387 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900499 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900657 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900798 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900913 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901018 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901141 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901242 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901359 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901467 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901571 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901787 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901942 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.902053 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.902182 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.902285 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.910687 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"]
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.962507 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.967040 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.969753 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.969851 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cert-cinder-public-svc"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.969988 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cert-cinder-internal-svc"
Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.977651 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003162 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003205 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003228 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003251 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003271 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003286 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003311 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003332 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003348 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003362 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003380 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003401 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003415 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003431 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2km\" (UniqueName: \"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003449 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003475 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003493 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003511 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003526 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003544 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003563 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003579 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003610 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003630 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003645 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003661 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003680 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003694 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003726 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003741 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003758 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003771 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003789 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003807 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003824 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003842 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003911 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.004559 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.004697 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005173 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005329 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005582 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005627 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005645 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005666 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005680 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005963 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.008194 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.009148 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.009192 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.020705 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.022163 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.025524 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.026050 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.026575 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.029087 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.029612 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104783 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104824 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104850 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104882 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:43 crc kubenswrapper[4913]: I0121
06:54:43.104899 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104925 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104962 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104985 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105009 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105045 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105065 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p2km\" (UniqueName: \"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105086 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105111 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105135 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105158 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105172 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105186 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105203 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105216 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105234 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: 
\"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105254 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105270 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105285 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105307 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105330 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105349 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105525 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105816 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105860 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105916 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105951 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105993 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.106099 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.106508 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.107649 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.107957 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.109963 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.120491 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p2km\" (UniqueName: \"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.123212 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.165493 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.181270 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.200943 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206138 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206181 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206201 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206226 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206251 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206275 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206294 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206316 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206330 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206998 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.207556 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") pod \"cinder-api-0\" (UID: 
\"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.210135 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.210232 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.210382 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.211944 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.212235 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.214060 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.227396 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.289431 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.435119 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: W0121 06:54:43.493372 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbea9195d_d908_4239_a57b_6783d75b959c.slice/crio-35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f WatchSource:0}: Error finding container 35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f: Status 404 returned error can't find the container with id 35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.498342 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.531836 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerStarted","Data":"fb558be5bdd8427975385c1ca42849d170194d638784eeb487c234d00654676d"} Jan 21 06:54:43 crc kubenswrapper[4913]: 
I0121 06:54:43.537009 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerStarted","Data":"35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f"} Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.570096 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: W0121 06:54:43.571734 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9b3c88_2566_48cb_8f74_d1976b0e6bd1.slice/crio-ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8 WatchSource:0}: Error finding container ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8: Status 404 returned error can't find the container with id ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8 Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.800415 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: W0121 06:54:43.820272 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2458ee8d_8802_4047_a9fe_d077f2d2450d.slice/crio-449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb WatchSource:0}: Error finding container 449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb: Status 404 returned error can't find the container with id 449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.556109 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerStarted","Data":"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"} Jan 21 
06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.556730 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerStarted","Data":"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.561424 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerStarted","Data":"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.561471 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerStarted","Data":"449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.563860 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerStarted","Data":"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.563915 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerStarted","Data":"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.566031 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.566066 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.566080 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.583411 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-0" podStartSLOduration=2.583388707 podStartE2EDuration="2.583388707s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:44.57722078 +0000 UTC m=+1174.373580493" watchObservedRunningTime="2026-01-21 06:54:44.583388707 +0000 UTC m=+1174.379748400" Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.619872 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podStartSLOduration=2.6198504209999998 podStartE2EDuration="2.619850421s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:44.611906986 +0000 UTC m=+1174.408266689" watchObservedRunningTime="2026-01-21 06:54:44.619850421 +0000 UTC m=+1174.416210094" Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.642667 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-0" podStartSLOduration=2.642646727 podStartE2EDuration="2.642646727s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:44.635228767 +0000 UTC m=+1174.431588460" watchObservedRunningTime="2026-01-21 06:54:44.642646727 +0000 UTC m=+1174.439006410" Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.575401 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerStarted","Data":"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616"} Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.575934 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.577419 4913 generic.go:334] "Generic (PLEG): container finished" podID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerID="c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2" exitCode=1 Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.577472 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2"} Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.577713 4913 scope.go:117] "RemoveContainer" containerID="c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2" Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.608512 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.608484712 podStartE2EDuration="3.608484712s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:45.601955315 +0000 UTC m=+1175.398315008" watchObservedRunningTime="2026-01-21 06:54:45.608484712 
+0000 UTC m=+1175.404844425" Jan 21 06:54:46 crc kubenswrapper[4913]: I0121 06:54:46.588311 4913 generic.go:334] "Generic (PLEG): container finished" podID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerID="d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81" exitCode=1 Jan 21 06:54:46 crc kubenswrapper[4913]: I0121 06:54:46.588371 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81"} Jan 21 06:54:46 crc kubenswrapper[4913]: I0121 06:54:46.590452 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063"} Jan 21 06:54:46 crc kubenswrapper[4913]: I0121 06:54:46.589003 4913 scope.go:117] "RemoveContainer" containerID="d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81" Jan 21 06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.600355 4913 generic.go:334] "Generic (PLEG): container finished" podID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" exitCode=1 Jan 21 06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.600435 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063"} Jan 21 06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.600689 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c"} Jan 21 
06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.600714 4913 scope.go:117] "RemoveContainer" containerID="c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2" Jan 21 06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.601151 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 06:54:47 crc kubenswrapper[4913]: E0121 06:54:47.601467 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.166055 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.181788 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.201936 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.373900 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.417521 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.609831 4913 generic.go:334] "Generic (PLEG): container finished" podID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" exitCode=1 Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 
06:54:48.609909 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c"} Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.610309 4913 scope.go:117] "RemoveContainer" containerID="d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.610709 4913 scope.go:117] "RemoveContainer" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.610767 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 06:54:48 crc kubenswrapper[4913]: E0121 06:54:48.611191 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:49 crc kubenswrapper[4913]: I0121 06:54:49.201858 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:49 crc kubenswrapper[4913]: I0121 06:54:49.624945 4913 scope.go:117] "RemoveContainer" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" Jan 21 06:54:49 crc kubenswrapper[4913]: I0121 06:54:49.625003 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 
21 06:54:49 crc kubenswrapper[4913]: E0121 06:54:49.626487 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:50 crc kubenswrapper[4913]: I0121 06:54:50.637247 4913 scope.go:117] "RemoveContainer" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" Jan 21 06:54:50 crc kubenswrapper[4913]: I0121 06:54:50.637302 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 06:54:50 crc kubenswrapper[4913]: E0121 06:54:50.637741 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:53 crc kubenswrapper[4913]: I0121 06:54:53.201852 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:53 crc kubenswrapper[4913]: I0121 06:54:53.203500 4913 scope.go:117] "RemoveContainer" 
containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" Jan 21 06:54:53 crc kubenswrapper[4913]: I0121 06:54:53.203526 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 06:54:53 crc kubenswrapper[4913]: E0121 06:54:53.204061 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:55 crc kubenswrapper[4913]: I0121 06:54:55.338675 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.323336 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.335172 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.347116 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.347620 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="probe" containerID="cri-o://fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.347973 4913 
kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="cinder-backup" containerID="cri-o://c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.356945 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.357236 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="cinder-scheduler" containerID="cri-o://ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.357387 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="probe" containerID="cri-o://858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.417207 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.425903 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.426645 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.433694 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.434179 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api-log" containerID="cri-o://93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.434292 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" containerID="cri-o://f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.439854 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-0" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.106:8776/healthcheck\": EOF" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.440655 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.535509 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5f0576-b10d-48d0-9017-4e24b85a1968" path="/var/lib/kubelet/pods/2a5f0576-b10d-48d0-9017-4e24b85a1968/volumes" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.608710 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") pod 
\"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.608763 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") pod \"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.688785 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8"} Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.688860 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.691920 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerDied","Data":"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2"} Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.691864 4913 generic.go:334] "Generic (PLEG): container finished" podID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerID="93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" exitCode=143 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.710432 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") pod \"cinderb51d-account-delete-jvhc7\" (UID: 
\"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.710484 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") pod \"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.711346 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") pod \"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.734582 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.734330 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") pod \"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.739672 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912770 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912828 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912852 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912884 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912907 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912931 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" 
(UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912971 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912988 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913006 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913028 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913048 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913064 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p2km\" (UniqueName: \"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913083 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913107 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913121 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913420 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913758 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913829 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run" (OuterVolumeSpecName: "run") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913852 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913967 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913998 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.914015 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev" (OuterVolumeSpecName: "dev") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.914711 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys" (OuterVolumeSpecName: "sys") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.914784 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.914812 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.918687 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts" (OuterVolumeSpecName: "scripts") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.918766 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km" (OuterVolumeSpecName: "kube-api-access-2p2km") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "kube-api-access-2p2km". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.924900 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.953374 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.988683 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data" (OuterVolumeSpecName: "config-data") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014797 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014836 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014849 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014863 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 
06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014876 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014887 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014897 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014908 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014919 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014930 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014943 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014953 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p2km\" (UniqueName: \"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014968 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014979 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014990 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.208432 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"]
Jan 21 06:54:57 crc kubenswrapper[4913]: W0121 06:54:57.213571 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod583f045f_efa0_4df8_8b2f_b9699740fc92.slice/crio-b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699 WatchSource:0}: Error finding container b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699: Status 404 returned error can't find the container with id b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.700253 4913 generic.go:334] "Generic (PLEG): container finished" podID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerID="858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67" exitCode=0
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.700307 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerDied","Data":"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"}
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.702213 4913 generic.go:334] "Generic (PLEG): container finished" podID="583f045f-efa0-4df8-8b2f-b9699740fc92" containerID="432a97f97a1e748db15a8c859dcaa7de8838a131f61c83acbb060114eb9ecddf" exitCode=0
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.702280 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" event={"ID":"583f045f-efa0-4df8-8b2f-b9699740fc92","Type":"ContainerDied","Data":"432a97f97a1e748db15a8c859dcaa7de8838a131f61c83acbb060114eb9ecddf"}
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.702314 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" event={"ID":"583f045f-efa0-4df8-8b2f-b9699740fc92","Type":"ContainerStarted","Data":"b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699"}
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.704304 4913 generic.go:334] "Generic (PLEG): container finished" podID="bea9195d-d908-4239-a57b-6783d75b959c" containerID="fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7" exitCode=0
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.704365 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerDied","Data":"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"}
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.704392 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.751732 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.756777 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 21 06:54:58 crc kubenswrapper[4913]: I0121 06:54:58.536002 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" path="/var/lib/kubelet/pods/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1/volumes"
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.059916 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.242254 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") pod \"583f045f-efa0-4df8-8b2f-b9699740fc92\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") "
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.242385 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") pod \"583f045f-efa0-4df8-8b2f-b9699740fc92\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") "
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.243136 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "583f045f-efa0-4df8-8b2f-b9699740fc92" (UID: "583f045f-efa0-4df8-8b2f-b9699740fc92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.249376 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n" (OuterVolumeSpecName: "kube-api-access-2tv2n") pod "583f045f-efa0-4df8-8b2f-b9699740fc92" (UID: "583f045f-efa0-4df8-8b2f-b9699740fc92"). InnerVolumeSpecName "kube-api-access-2tv2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.344322 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.344356 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") on node \"crc\" DevicePath \"\""
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.725685 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" event={"ID":"583f045f-efa0-4df8-8b2f-b9699740fc92","Type":"ContainerDied","Data":"b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699"}
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.725724 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699"
Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.725764 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.645829 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.655474 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.736649 4913 generic.go:334] "Generic (PLEG): container finished" podID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerID="ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e" exitCode=0
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.736866 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.737103 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerDied","Data":"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"}
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.737260 4913 scope.go:117] "RemoveContainer" containerID="858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.737822 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerDied","Data":"fb558be5bdd8427975385c1ca42849d170194d638784eeb487c234d00654676d"}
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.742194 4913 generic.go:334] "Generic (PLEG): container finished" podID="bea9195d-d908-4239-a57b-6783d75b959c" containerID="c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b" exitCode=0
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.742252 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerDied","Data":"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"}
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.742291 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerDied","Data":"35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f"}
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.742400 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.762945 4913 scope.go:117] "RemoveContainer" containerID="ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776785 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776838 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776892 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776901 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run" (OuterVolumeSpecName: "run") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776934 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776972 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777001 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777035 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777058 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777084 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777119 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777186 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777209 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777230 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777264 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777296 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777364 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777394 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777425 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777450 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777475 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") "
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777854 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776970 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev" (OuterVolumeSpecName: "dev") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777002 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778108 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778188 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778294 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778576 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys" (OuterVolumeSpecName: "sys") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777997 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778701 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778893 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778906 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783115 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw" (OuterVolumeSpecName: "kube-api-access-lndcw") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "kube-api-access-lndcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783317 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783323 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p" (OuterVolumeSpecName: "kube-api-access-lv65p") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "kube-api-access-lv65p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783500 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts" (OuterVolumeSpecName: "scripts") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783753 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.784868 4913 scope.go:117] "RemoveContainer" containerID="858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.785008 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts" (OuterVolumeSpecName: "scripts") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: E0121 06:55:00.785337 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67\": container with ID starting with 858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67 not found: ID does not exist" containerID="858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.785364 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"} err="failed to get container status \"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67\": rpc error: code = NotFound desc = could not find container \"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67\": container with ID starting with 858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67 not found: ID does not exist"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.785382 4913 scope.go:117] "RemoveContainer" containerID="ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"
Jan 21 06:55:00 crc kubenswrapper[4913]: E0121 06:55:00.785969 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e\": container with ID starting with ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e not found: ID does not exist" containerID="ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.786004 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"} err="failed to get container status \"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e\": rpc error: code = NotFound desc = could not find container \"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e\": container with ID starting with ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e not found: ID does not exist"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.786025 4913 scope.go:117] "RemoveContainer" containerID="fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.807080 4913 scope.go:117] "RemoveContainer" containerID="c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.825366 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.835517 4913 scope.go:117] "RemoveContainer" containerID="fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"
Jan 21 06:55:00 crc kubenswrapper[4913]: E0121 06:55:00.837751 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7\": container with ID starting with fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7 not found: ID does not exist" containerID="fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.837820 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"} err="failed to get container status \"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7\": rpc error: code = NotFound desc = could not find container \"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7\": container with ID starting with fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7 not found: ID does not exist"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.837859 4913 scope.go:117] "RemoveContainer" containerID="c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"
Jan 21 06:55:00 crc kubenswrapper[4913]: E0121 06:55:00.838371 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b\": container with ID starting with c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b not found: ID does not exist" containerID="c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.838443 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"} err="failed to get container status \"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b\": rpc error: code = NotFound desc = could not find container \"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b\": container with ID starting with c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b not found: ID does not exist"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.841898 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.845652 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.106:8776/healthcheck\": read tcp 10.217.0.2:49308->10.217.0.106:8776: read: connection reset by peer"
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.852756 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data" (OuterVolumeSpecName: "config-data") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.856199 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data" (OuterVolumeSpecName: "config-data") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879234 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879257 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879266 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879277 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879286 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879293 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879301 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879309 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879317 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879324 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879332 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879340 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879348 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:00 crc kubenswrapper[4913]: I0121
06:55:00.879356 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879363 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879371 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879378 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879386 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879417 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879425 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.102749 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:55:01 
crc kubenswrapper[4913]: I0121 06:55:01.111797 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.121974 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.134223 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.372402 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.438737 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.446511 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.455835 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.462237 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.466745 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.478418 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.495970 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") pod 
\"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496013 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496044 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496072 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496112 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496144 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496162 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496217 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496235 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496559 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.497359 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs" (OuterVolumeSpecName: "logs") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.500215 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts" (OuterVolumeSpecName: "scripts") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.500929 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf" (OuterVolumeSpecName: "kube-api-access-m5bkf") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "kube-api-access-m5bkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.501129 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.513962 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.531900 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.542721 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data" (OuterVolumeSpecName: "config-data") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.545106 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597293 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597436 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597516 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597582 4913 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597645 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597691 4913 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.598154 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.598179 4913 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.598189 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751649 4913 generic.go:334] "Generic (PLEG): container finished" podID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerID="f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" exitCode=0 Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751751 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751774 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerDied","Data":"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616"} Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751844 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerDied","Data":"449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb"} Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751874 4913 scope.go:117] "RemoveContainer" containerID="f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.786020 4913 scope.go:117] "RemoveContainer" containerID="93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.800776 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.810414 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.815574 4913 scope.go:117] "RemoveContainer" containerID="f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" Jan 21 06:55:01 crc kubenswrapper[4913]: E0121 06:55:01.816417 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616\": container with ID starting with f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616 not found: ID does not exist" containerID="f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.816467 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616"} err="failed to get container status \"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616\": rpc error: code = NotFound desc = could not find container \"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616\": container with ID starting with f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616 not found: ID does not exist" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.816506 4913 scope.go:117] "RemoveContainer" containerID="93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" Jan 21 06:55:01 crc kubenswrapper[4913]: E0121 06:55:01.817245 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2\": container with ID starting with 93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2 not found: ID does not exist" 
containerID="93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.817326 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2"} err="failed to get container status \"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2\": rpc error: code = NotFound desc = could not find container \"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2\": container with ID starting with 93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2 not found: ID does not exist" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.540831 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" path="/var/lib/kubelet/pods/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.542653 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" path="/var/lib/kubelet/pods/2458ee8d-8802-4047-a9fe-d077f2d2450d/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.544108 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583f045f-efa0-4df8-8b2f-b9699740fc92" path="/var/lib/kubelet/pods/583f045f-efa0-4df8-8b2f-b9699740fc92/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.546574 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" path="/var/lib/kubelet/pods/a4203da3-d347-42bb-8e9b-6bdbf250c4eb/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.548167 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea9195d-d908-4239-a57b-6783d75b959c" path="/var/lib/kubelet/pods/bea9195d-d908-4239-a57b-6783d75b959c/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.549646 4913 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" path="/var/lib/kubelet/pods/ed1e527b-217a-46b6-a907-0a6b589f7c4c/volumes" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.437706 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.445317 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.457299 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.462652 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478493 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478802 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="cinder-scheduler" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478819 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="cinder-scheduler" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478829 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478836 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478848 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" 
containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478854 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478864 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478870 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478879 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478886 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478896 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478902 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478912 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="cinder-backup" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478917 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="cinder-backup" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478926 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api-log" Jan 21 06:55:03 crc 
kubenswrapper[4913]: I0121 06:55:03.478931 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api-log" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478940 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583f045f-efa0-4df8-8b2f-b9699740fc92" containerName="mariadb-account-delete" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478945 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="583f045f-efa0-4df8-8b2f-b9699740fc92" containerName="mariadb-account-delete" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478960 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478966 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479065 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api-log" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479078 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479086 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479097 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479104 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479112 
4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479119 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="cinder-backup" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479125 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479134 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="583f045f-efa0-4df8-8b2f-b9699740fc92" containerName="mariadb-account-delete" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479141 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="cinder-scheduler" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479148 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479635 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.482495 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.524745 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.525029 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerName="keystone-api" containerID="cri-o://d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588" gracePeriod=30 Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.531495 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.531565 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.632560 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") pod \"keystonec9ef-account-delete-g88bp\" (UID: 
\"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.632662 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.633333 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.666977 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.795512 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.186289 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.221082 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.228557 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.229010 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.229030 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.229805 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.233450 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.238218 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.244048 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.253222 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.260141 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.262053 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-nltwk operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" podUID="91b33253-3e4a-44e4-9354-e293f4758d78" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.268957 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.287760 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.342666 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") pod \"root-account-create-update-f9cwc\" (UID: 
\"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.342750 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.399825 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-2" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera" containerID="cri-o://b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" gracePeriod=30 Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.444366 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.444523 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.444685 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.444781 4913 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:04.944759329 +0000 UTC m=+1194.741119002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.448845 4913 projected.go:194] Error preparing data for projected volume kube-api-access-nltwk for pod cinder-kuttl-tests/root-account-create-update-f9cwc: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.448913 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:04.948896611 +0000 UTC m=+1194.745256284 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nltwk" (UniqueName: "kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.534261 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09733cef-ac9b-4a13-92a5-4b416079180f" path="/var/lib/kubelet/pods/09733cef-ac9b-4a13-92a5-4b416079180f/volumes" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.535188 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345b0465-d6ca-45e5-bd9d-47a6adacb366" path="/var/lib/kubelet/pods/345b0465-d6ca-45e5-bd9d-47a6adacb366/volumes" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.535896 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" path="/var/lib/kubelet/pods/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56/volumes" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790147 4913 generic.go:334] "Generic (PLEG): container finished" podID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerID="5b19fca2e0415541325dffce9796df19298b62c4daa327fe5fec8ed05f6b1cd2" exitCode=1 Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790234 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790258 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" event={"ID":"1bcf0783-d151-4d4d-ad95-5671ec458c85","Type":"ContainerDied","Data":"5b19fca2e0415541325dffce9796df19298b62c4daa327fe5fec8ed05f6b1cd2"} Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790341 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" event={"ID":"1bcf0783-d151-4d4d-ad95-5671ec458c85","Type":"ContainerStarted","Data":"337073b67ec506202df85dc5fd188199394795c2d0bc028de5810b4d603092f0"} Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790787 4913 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" secret="" err="secret \"galera-openstack-dockercfg-6gtwj\" not found" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790845 4913 scope.go:117] "RemoveContainer" containerID="5b19fca2e0415541325dffce9796df19298b62c4daa327fe5fec8ed05f6b1cd2" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.807499 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.849925 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.849991 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts podName:1bcf0783-d151-4d4d-ad95-5671ec458c85 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:05.349977633 +0000 UTC m=+1195.146337306 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts") pod "keystonec9ef-account-delete-g88bp" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85") : configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.915470 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.915970 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/memcached-0" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerName="memcached" containerID="cri-o://9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" gracePeriod=30 Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.952381 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.953019 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.953330 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.953393 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:05.953374816 +0000 UTC m=+1195.749734499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.967218 4913 projected.go:194] Error preparing data for projected volume kube-api-access-nltwk for pod cinder-kuttl-tests/root-account-create-update-f9cwc: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.967349 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:05.967316702 +0000 UTC m=+1195.763676415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nltwk" (UniqueName: "kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.318655 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.359769 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.359858 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts podName:1bcf0783-d151-4d4d-ad95-5671ec458c85 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:06.359839183 +0000 UTC m=+1196.156198866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts") pod "keystonec9ef-account-delete-g88bp" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85") : configmap "openstack-scripts" not found Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.420116 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461234 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461378 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461427 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461500 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461559 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461634 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.462089 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.462277 4913 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.462418 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.462452 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.463756 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.470852 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p" (OuterVolumeSpecName: "kube-api-access-2xh9p") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "kube-api-access-2xh9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.475227 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.563980 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.564022 4913 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.564032 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.564042 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") on node 
\"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.564053 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.576439 4913 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.665986 4913 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801162 4913 generic.go:334] "Generic (PLEG): container finished" podID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerID="b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" exitCode=0 Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801242 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801287 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerDied","Data":"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa"} Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801382 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerDied","Data":"5f5d4f1ef26e68f7b2d31a9b3d84d0da1ff312a47ab5657edc54afc49f04f096"} Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801410 4913 scope.go:117] "RemoveContainer" containerID="b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.806267 4913 generic.go:334] "Generic (PLEG): container finished" podID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerID="d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275" exitCode=1 Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.806381 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.806747 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" event={"ID":"1bcf0783-d151-4d4d-ad95-5671ec458c85","Type":"ContainerDied","Data":"d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275"} Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.807154 4913 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" secret="" err="secret \"galera-openstack-dockercfg-6gtwj\" not found" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.807213 4913 scope.go:117] "RemoveContainer" containerID="d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.807535 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystonec9ef-account-delete-g88bp_cinder-kuttl-tests(1bcf0783-d151-4d4d-ad95-5671ec458c85)\"" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.843010 4913 scope.go:117] "RemoveContainer" containerID="00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.866157 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.871688 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.900541 4913 scope.go:117] "RemoveContainer" containerID="b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.902550 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa\": container with ID starting with b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa not found: ID does not exist" containerID="b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.902648 
4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa"} err="failed to get container status \"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa\": rpc error: code = NotFound desc = could not find container \"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa\": container with ID starting with b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa not found: ID does not exist" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.902689 4913 scope.go:117] "RemoveContainer" containerID="00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.903762 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49\": container with ID starting with 00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49 not found: ID does not exist" containerID="00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.903809 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49"} err="failed to get container status \"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49\": rpc error: code = NotFound desc = could not find container \"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49\": container with ID starting with 00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49 not found: ID does not exist" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.903836 4913 scope.go:117] "RemoveContainer" containerID="5b19fca2e0415541325dffce9796df19298b62c4daa327fe5fec8ed05f6b1cd2" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 
06:55:05.927184 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.931447 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.937687 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.972839 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq" containerID="cri-o://f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" gracePeriod=604800 Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.972952 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.973043 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.973175 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.973261 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:07.97324214 +0000 UTC m=+1197.769601823 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : configmap "openstack-scripts" not found Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.974822 4913 projected.go:194] Error preparing data for projected volume kube-api-access-nltwk for pod cinder-kuttl-tests/root-account-create-update-f9cwc: failed to fetch token: pod "root-account-create-update-f9cwc" not found Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.974896 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:07.974874483 +0000 UTC m=+1197.771234176 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nltwk" (UniqueName: "kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : failed to fetch token: pod "root-account-create-update-f9cwc" not found Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.074485 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.074532 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.319188 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.379365 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") pod \"ac820b36-83fb-44ca-97b0-6181846a5ef3\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.379452 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") pod \"ac820b36-83fb-44ca-97b0-6181846a5ef3\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.379525 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") pod \"ac820b36-83fb-44ca-97b0-6181846a5ef3\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.379934 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data" (OuterVolumeSpecName: "config-data") pod "ac820b36-83fb-44ca-97b0-6181846a5ef3" (UID: "ac820b36-83fb-44ca-97b0-6181846a5ef3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:06 crc kubenswrapper[4913]: E0121 06:55:06.380078 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.380103 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ac820b36-83fb-44ca-97b0-6181846a5ef3" (UID: "ac820b36-83fb-44ca-97b0-6181846a5ef3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.380112 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: E0121 06:55:06.380130 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts podName:1bcf0783-d151-4d4d-ad95-5671ec458c85 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:08.380112237 +0000 UTC m=+1198.176471910 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts") pod "keystonec9ef-account-delete-g88bp" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85") : configmap "openstack-scripts" not found Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.384629 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747" (OuterVolumeSpecName: "kube-api-access-tx747") pod "ac820b36-83fb-44ca-97b0-6181846a5ef3" (UID: "ac820b36-83fb-44ca-97b0-6181846a5ef3"). InnerVolumeSpecName "kube-api-access-tx747". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.456674 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-1" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera" containerID="cri-o://fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" gracePeriod=28 Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.481236 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.481279 4913 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.536906 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" path="/var/lib/kubelet/pods/06b1fd9b-951d-4d8e-8a08-4a2e8d820370/volumes" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.537787 4913 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="91b33253-3e4a-44e4-9354-e293f4758d78" path="/var/lib/kubelet/pods/91b33253-3e4a-44e4-9354-e293f4758d78/volumes" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.827452 4913 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" secret="" err="secret \"galera-openstack-dockercfg-6gtwj\" not found" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.827501 4913 scope.go:117] "RemoveContainer" containerID="d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275" Jan 21 06:55:06 crc kubenswrapper[4913]: E0121 06:55:06.827718 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystonec9ef-account-delete-g88bp_cinder-kuttl-tests(1bcf0783-d151-4d4d-ad95-5671ec458c85)\"" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.830772 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.831019 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerName="manager" containerID="cri-o://19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" gracePeriod=10 Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.851475 4913 generic.go:334] "Generic (PLEG): container finished" podID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerID="d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588" exitCode=0 Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.851619 4913 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" event={"ID":"fde82b66-4c57-4f59-839e-5ccb89d18944","Type":"ContainerDied","Data":"d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588"} Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860128 4913 generic.go:334] "Generic (PLEG): container finished" podID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerID="9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" exitCode=0 Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860185 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"ac820b36-83fb-44ca-97b0-6181846a5ef3","Type":"ContainerDied","Data":"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3"} Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860208 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"ac820b36-83fb-44ca-97b0-6181846a5ef3","Type":"ContainerDied","Data":"7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92"} Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860223 4913 scope.go:117] "RemoveContainer" containerID="9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860305 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.883077 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.892482 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.902797 4913 scope.go:117] "RemoveContainer" containerID="9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" Jan 21 06:55:06 crc kubenswrapper[4913]: E0121 06:55:06.909231 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3\": container with ID starting with 9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3 not found: ID does not exist" containerID="9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.909276 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3"} err="failed to get container status \"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3\": rpc error: code = NotFound desc = could not find container \"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3\": container with ID starting with 9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3 not found: ID does not exist" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.153700 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.154290 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-index-4jlfb" 
podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerName="registry-server" containerID="cri-o://9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" gracePeriod=30 Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.194065 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.204152 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.371335 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.377024 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497485 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") pod \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497541 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497577 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: 
\"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497632 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497684 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") pod \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497703 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497741 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") pod \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497789 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.505693 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77" (OuterVolumeSpecName: "kube-api-access-z6l77") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "kube-api-access-z6l77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.506830 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.506823 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.507252 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts" (OuterVolumeSpecName: "scripts") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.508198 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" (UID: "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.509255 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" (UID: "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.517747 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7" (OuterVolumeSpecName: "kube-api-access-4c7c7") pod "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" (UID: "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d"). InnerVolumeSpecName "kube-api-access-4c7c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.519916 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data" (OuterVolumeSpecName: "config-data") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.566782 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599693 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599729 4913 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599740 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599750 4913 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599759 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599767 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599776 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 
06:55:07.599785 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.608499 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.700910 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701193 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701212 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701270 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") pod \"4f61c697-fbcc-4e33-929b-03eacd477d73\" (UID: \"4f61c697-fbcc-4e33-929b-03eacd477d73\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701290 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701324 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701340 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701664 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701726 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701752 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702070 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702636 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702915 4913 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702930 4913 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702939 4913 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.704332 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info" (OuterVolumeSpecName: "pod-info") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.704364 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7" (OuterVolumeSpecName: "kube-api-access-n4vm7") pod "4f61c697-fbcc-4e33-929b-03eacd477d73" (UID: "4f61c697-fbcc-4e33-929b-03eacd477d73"). InnerVolumeSpecName "kube-api-access-n4vm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.704434 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.705476 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc" (OuterVolumeSpecName: "kube-api-access-rgplc") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "kube-api-access-rgplc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.712093 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965" (OuterVolumeSpecName: "persistence") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "pvc-aba37d31-e514-4471-ba72-3f2eeef63965". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.759504 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804331 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804408 4913 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804429 4913 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804446 4913 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804464 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804534 4913 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") on node \"crc\" " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.828527 4913 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.828824 4913 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-aba37d31-e514-4471-ba72-3f2eeef63965" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965") on node "crc" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.877947 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" event={"ID":"fde82b66-4c57-4f59-839e-5ccb89d18944","Type":"ContainerDied","Data":"319d2fc6458bad5a006b1117b9ecf9841ebe516000026a3a782671bea30c10cd"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.878007 4913 scope.go:117] "RemoveContainer" containerID="d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.878016 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.883197 4913 generic.go:334] "Generic (PLEG): container finished" podID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerID="9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" exitCode=0 Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.883237 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.883304 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-4jlfb" event={"ID":"4f61c697-fbcc-4e33-929b-03eacd477d73","Type":"ContainerDied","Data":"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.883414 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-4jlfb" event={"ID":"4f61c697-fbcc-4e33-929b-03eacd477d73","Type":"ContainerDied","Data":"d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.889412 4913 generic.go:334] "Generic (PLEG): container finished" podID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerID="19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" exitCode=0 Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.889480 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" event={"ID":"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d","Type":"ContainerDied","Data":"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.889503 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.889504 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" event={"ID":"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d","Type":"ContainerDied","Data":"f82866f4a640b05de032b2242387c51f49628b80b3fcaf42729718719aa9d672"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.891972 4913 generic.go:334] "Generic (PLEG): container finished" podID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerID="f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" exitCode=0 Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.892005 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerDied","Data":"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.892025 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerDied","Data":"f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.892075 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.905382 4913 reconciler_common.go:293] "Volume detached for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.914865 4913 scope.go:117] "RemoveContainer" containerID="9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.934137 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.941745 4913 scope.go:117] "RemoveContainer" containerID="9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" Jan 21 06:55:07 crc kubenswrapper[4913]: E0121 06:55:07.942800 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d\": container with ID starting with 9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d not found: ID does not exist" containerID="9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.942844 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d"} err="failed to get container status \"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d\": rpc error: code = NotFound desc = could not find container \"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d\": container with ID starting with 9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d not found: ID does not exist" Jan 21 06:55:07 crc 
kubenswrapper[4913]: I0121 06:55:07.942877 4913 scope.go:117] "RemoveContainer" containerID="19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.948018 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.958424 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.965496 4913 scope.go:117] "RemoveContainer" containerID="19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" Jan 21 06:55:07 crc kubenswrapper[4913]: E0121 06:55:07.966438 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a\": container with ID starting with 19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a not found: ID does not exist" containerID="19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.966485 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a"} err="failed to get container status \"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a\": rpc error: code = NotFound desc = could not find container \"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a\": container with ID starting with 19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a not found: ID does not exist" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.966510 4913 scope.go:117] "RemoveContainer" containerID="f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.971165 4913 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.976326 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.982057 4913 scope.go:117] "RemoveContainer" containerID="f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.983227 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.989448 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.994010 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.999395 4913 scope.go:117] "RemoveContainer" containerID="f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" Jan 21 06:55:07 crc kubenswrapper[4913]: E0121 06:55:07.999886 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54\": container with ID starting with f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54 not found: ID does not exist" containerID="f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:07.999953 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54"} err="failed to get container status \"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54\": rpc error: code = 
NotFound desc = could not find container \"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54\": container with ID starting with f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54 not found: ID does not exist" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:07.999983 4913 scope.go:117] "RemoveContainer" containerID="f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d" Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.000467 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d\": container with ID starting with f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d not found: ID does not exist" containerID="f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.000514 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d"} err="failed to get container status \"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d\": rpc error: code = NotFound desc = could not find container \"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d\": container with ID starting with f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d not found: ID does not exist" Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.095345 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 is running failed: container process not found" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 06:55:08 crc kubenswrapper[4913]: 
E0121 06:55:08.096926 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 is running failed: container process not found" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.097423 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 is running failed: container process not found" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.097501 4913 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 is running failed: container process not found" probeType="Readiness" pod="cinder-kuttl-tests/openstack-galera-1" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.377830 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.411636 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.411697 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts podName:1bcf0783-d151-4d4d-ad95-5671ec458c85 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:12.411681955 +0000 UTC m=+1202.208041628 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts") pod "keystonec9ef-account-delete-g88bp" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85") : configmap "openstack-scripts" not found Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.491777 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.495010 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-0" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="galera" containerID="cri-o://08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" gracePeriod=26 Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.497097 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.502541 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.506438 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:55:08 
crc kubenswrapper[4913]: I0121 06:55:08.511079 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.511900 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.511944 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.512011 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.512070 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.512093 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.512115 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.513017 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.514204 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.516198 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.516844 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.519577 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc" (OuterVolumeSpecName: "kube-api-access-hs7jc") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "kube-api-access-hs7jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.524234 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.539816 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" path="/var/lib/kubelet/pods/01ea35b3-9885-4acc-bed4-05b6213940be/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.541411 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" path="/var/lib/kubelet/pods/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.542342 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" path="/var/lib/kubelet/pods/15e33604-9af2-42b5-b1ad-ecd76d4898d4/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.543913 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" path="/var/lib/kubelet/pods/4f61c697-fbcc-4e33-929b-03eacd477d73/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.544704 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" path="/var/lib/kubelet/pods/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.545670 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" path="/var/lib/kubelet/pods/8ce82f18-1e1d-40f1-8207-428ea9445bc3/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.547209 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" path="/var/lib/kubelet/pods/ac820b36-83fb-44ca-97b0-6181846a5ef3/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.548026 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" 
path="/var/lib/kubelet/pods/fde82b66-4c57-4f59-839e-5ccb89d18944/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613524 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613558 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613571 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613613 4913 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613628 4913 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613640 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.628504 4913 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.714570 
4913 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.781163 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909493 4913 generic.go:334] "Generic (PLEG): container finished" podID="edaae817-2cda-4274-bad0-53165cffa224" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" exitCode=0 Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909668 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909670 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerDied","Data":"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94"} Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909898 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerDied","Data":"9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d"} Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909955 4913 scope.go:117] "RemoveContainer" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.914526 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" event={"ID":"1bcf0783-d151-4d4d-ad95-5671ec458c85","Type":"ContainerDied","Data":"337073b67ec506202df85dc5fd188199394795c2d0bc028de5810b4d603092f0"} Jan 21 06:55:08 crc kubenswrapper[4913]: 
I0121 06:55:08.914702 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.916214 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") pod \"1bcf0783-d151-4d4d-ad95-5671ec458c85\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.916421 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") pod \"1bcf0783-d151-4d4d-ad95-5671ec458c85\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.918707 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bcf0783-d151-4d4d-ad95-5671ec458c85" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.923272 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs" (OuterVolumeSpecName: "kube-api-access-mttcs") pod "1bcf0783-d151-4d4d-ad95-5671ec458c85" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85"). InnerVolumeSpecName "kube-api-access-mttcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.968978 4913 scope.go:117] "RemoveContainer" containerID="b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.971082 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.975871 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.002879 4913 scope.go:117] "RemoveContainer" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" Jan 21 06:55:09 crc kubenswrapper[4913]: E0121 06:55:09.003404 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94\": container with ID starting with fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 not found: ID does not exist" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.003473 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94"} err="failed to get container status \"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94\": rpc error: code = NotFound desc = could not find container \"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94\": container with ID starting with fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 not found: ID does not exist" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.003513 4913 scope.go:117] "RemoveContainer" containerID="b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf" Jan 21 06:55:09 crc 
kubenswrapper[4913]: E0121 06:55:09.003866 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf\": container with ID starting with b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf not found: ID does not exist" containerID="b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.003907 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf"} err="failed to get container status \"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf\": rpc error: code = NotFound desc = could not find container \"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf\": container with ID starting with b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf not found: ID does not exist" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.003932 4913 scope.go:117] "RemoveContainer" containerID="d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.019289 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.019313 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.233789 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.248554 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.252920 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.321841 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.321916 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322025 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322043 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322064 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322089 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322685 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322767 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.323070 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.323982 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.325911 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd" (OuterVolumeSpecName: "kube-api-access-5fzmd") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "kube-api-access-5fzmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.332738 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424251 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424307 4913 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424323 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424334 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424343 4913 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424353 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.434895 4913 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.525506 4913 
reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929145 4913 generic.go:334] "Generic (PLEG): container finished" podID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerID="08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" exitCode=0 Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929210 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerDied","Data":"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2"} Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929236 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerDied","Data":"275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac"} Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929251 4913 scope.go:117] "RemoveContainer" containerID="08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929248 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.955851 4913 scope.go:117] "RemoveContainer" containerID="11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.969818 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.975049 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.996704 4913 scope.go:117] "RemoveContainer" containerID="08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" Jan 21 06:55:09 crc kubenswrapper[4913]: E0121 06:55:09.997111 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2\": container with ID starting with 08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2 not found: ID does not exist" containerID="08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.997145 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2"} err="failed to get container status \"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2\": rpc error: code = NotFound desc = could not find container \"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2\": container with ID starting with 08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2 not found: ID does not exist" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.997173 4913 scope.go:117] "RemoveContainer" containerID="11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8" Jan 21 
06:55:09 crc kubenswrapper[4913]: E0121 06:55:09.997829 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8\": container with ID starting with 11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8 not found: ID does not exist" containerID="11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.997898 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8"} err="failed to get container status \"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8\": rpc error: code = NotFound desc = could not find container \"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8\": container with ID starting with 11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8 not found: ID does not exist" Jan 21 06:55:10 crc kubenswrapper[4913]: I0121 06:55:10.538552 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" path="/var/lib/kubelet/pods/1bcf0783-d151-4d4d-ad95-5671ec458c85/volumes" Jan 21 06:55:10 crc kubenswrapper[4913]: I0121 06:55:10.539560 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" path="/var/lib/kubelet/pods/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c/volumes" Jan 21 06:55:10 crc kubenswrapper[4913]: I0121 06:55:10.540197 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edaae817-2cda-4274-bad0-53165cffa224" path="/var/lib/kubelet/pods/edaae817-2cda-4274-bad0-53165cffa224/volumes" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.020394 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] 
Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.020761 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerName="manager" containerID="cri-o://e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" gracePeriod=10 Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.312096 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.312253 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-nvvrn" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerName="registry-server" containerID="cri-o://4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" gracePeriod=30 Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.381652 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.399802 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.546486 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.662986 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") pod \"2eed1c9d-583b-4678-a6d4-25ede526deb2\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.663052 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") pod \"2eed1c9d-583b-4678-a6d4-25ede526deb2\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.663115 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") pod \"2eed1c9d-583b-4678-a6d4-25ede526deb2\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.672820 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm" (OuterVolumeSpecName: "kube-api-access-69pvm") pod "2eed1c9d-583b-4678-a6d4-25ede526deb2" (UID: "2eed1c9d-583b-4678-a6d4-25ede526deb2"). InnerVolumeSpecName "kube-api-access-69pvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.675630 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2eed1c9d-583b-4678-a6d4-25ede526deb2" (UID: "2eed1c9d-583b-4678-a6d4-25ede526deb2"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.675730 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "2eed1c9d-583b-4678-a6d4-25ede526deb2" (UID: "2eed1c9d-583b-4678-a6d4-25ede526deb2"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.707270 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.764704 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") pod \"aafa8ec9-8d47-454f-ade6-cc83939b040d\" (UID: \"aafa8ec9-8d47-454f-ade6-cc83939b040d\") " Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.765042 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.765069 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.765081 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.767724 4913 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr" (OuterVolumeSpecName: "kube-api-access-d5nnr") pod "aafa8ec9-8d47-454f-ade6-cc83939b040d" (UID: "aafa8ec9-8d47-454f-ade6-cc83939b040d"). InnerVolumeSpecName "kube-api-access-d5nnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.866252 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960220 4913 generic.go:334] "Generic (PLEG): container finished" podID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerID="e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" exitCode=0 Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960299 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960302 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" event={"ID":"2eed1c9d-583b-4678-a6d4-25ede526deb2","Type":"ContainerDied","Data":"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2"} Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960408 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" event={"ID":"2eed1c9d-583b-4678-a6d4-25ede526deb2","Type":"ContainerDied","Data":"5f11053bf6e8005edf5c878b1053cb5b2f458f735b16ba02d777871ab59cfd24"} Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960431 4913 scope.go:117] "RemoveContainer" containerID="e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.962525 4913 generic.go:334] "Generic (PLEG): container finished" podID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerID="4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" exitCode=0 Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.962574 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nvvrn" event={"ID":"aafa8ec9-8d47-454f-ade6-cc83939b040d","Type":"ContainerDied","Data":"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741"} Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.962620 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nvvrn" event={"ID":"aafa8ec9-8d47-454f-ade6-cc83939b040d","Type":"ContainerDied","Data":"13b2addf8c21bece7103dc74546b4b535b876e4503f539b9501595a1b88972a6"} Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.962638 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.979837 4913 scope.go:117] "RemoveContainer" containerID="e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" Jan 21 06:55:11 crc kubenswrapper[4913]: E0121 06:55:11.980451 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2\": container with ID starting with e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2 not found: ID does not exist" containerID="e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.980502 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2"} err="failed to get container status \"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2\": rpc error: code = NotFound desc = could not find container \"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2\": container with ID starting with e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2 not found: ID does not exist" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.980534 4913 scope.go:117] "RemoveContainer" containerID="4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.991900 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.996924 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.001894 4913 scope.go:117] "RemoveContainer" 
containerID="4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" Jan 21 06:55:12 crc kubenswrapper[4913]: E0121 06:55:12.003929 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741\": container with ID starting with 4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741 not found: ID does not exist" containerID="4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.003973 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741"} err="failed to get container status \"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741\": rpc error: code = NotFound desc = could not find container \"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741\": container with ID starting with 4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741 not found: ID does not exist" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.008612 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.016691 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.374122 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.71:5672: i/o timeout" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.534502 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" 
path="/var/lib/kubelet/pods/2eed1c9d-583b-4678-a6d4-25ede526deb2/volumes" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.534976 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" path="/var/lib/kubelet/pods/aafa8ec9-8d47-454f-ade6-cc83939b040d/volumes" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.535423 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" path="/var/lib/kubelet/pods/e3feb49b-10bf-4116-91b9-e9b726161892/volumes" Jan 21 06:55:13 crc kubenswrapper[4913]: I0121 06:55:13.721076 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:55:13 crc kubenswrapper[4913]: I0121 06:55:13.721638 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerName="operator" containerID="cri-o://3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630" gracePeriod=10 Jan 21 06:55:13 crc kubenswrapper[4913]: I0121 06:55:13.981290 4913 generic.go:334] "Generic (PLEG): container finished" podID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerID="3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630" exitCode=0 Jan 21 06:55:13 crc kubenswrapper[4913]: I0121 06:55:13.981339 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" event={"ID":"f401b62e-8ebd-413e-a383-d9e74626c3d4","Type":"ContainerDied","Data":"3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630"} Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.002859 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.003117 4913 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" containerName="registry-server" containerID="cri-o://9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" gracePeriod=30 Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.033848 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"] Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.041668 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"] Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.237889 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.296018 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") pod \"f401b62e-8ebd-413e-a383-d9e74626c3d4\" (UID: \"f401b62e-8ebd-413e-a383-d9e74626c3d4\") " Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.303307 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9" (OuterVolumeSpecName: "kube-api-access-9c6s9") pod "f401b62e-8ebd-413e-a383-d9e74626c3d4" (UID: "f401b62e-8ebd-413e-a383-d9e74626c3d4"). InnerVolumeSpecName "kube-api-access-9c6s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.399850 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.412607 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.501011 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") pod \"d5725475-8b61-45a7-91e8-1d28e9042910\" (UID: \"d5725475-8b61-45a7-91e8-1d28e9042910\") " Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.503853 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68" (OuterVolumeSpecName: "kube-api-access-ltk68") pod "d5725475-8b61-45a7-91e8-1d28e9042910" (UID: "d5725475-8b61-45a7-91e8-1d28e9042910"). InnerVolumeSpecName "kube-api-access-ltk68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.534381 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" path="/var/lib/kubelet/pods/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff/volumes" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.602743 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003459 4913 generic.go:334] "Generic (PLEG): container finished" podID="d5725475-8b61-45a7-91e8-1d28e9042910" containerID="9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" exitCode=0 Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003548 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003634 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" event={"ID":"d5725475-8b61-45a7-91e8-1d28e9042910","Type":"ContainerDied","Data":"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30"} Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003720 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" event={"ID":"d5725475-8b61-45a7-91e8-1d28e9042910","Type":"ContainerDied","Data":"f8fbd38ff1590a71df6d9f315408484aabe627daa085b88b69fdaab05a28c092"} Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003747 4913 scope.go:117] "RemoveContainer" containerID="9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.005803 4913 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" event={"ID":"f401b62e-8ebd-413e-a383-d9e74626c3d4","Type":"ContainerDied","Data":"35b959c40aff587948c5fd74b98b898c0bc76e951ec34079c1bec3b80111a1d1"} Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.005865 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.036919 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.036965 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.048155 4913 scope.go:117] "RemoveContainer" containerID="9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" Jan 21 06:55:15 crc kubenswrapper[4913]: E0121 06:55:15.048735 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30\": container with ID starting with 9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30 not found: ID does not exist" containerID="9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.048803 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30"} err="failed to get container status \"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30\": rpc error: code = NotFound desc = could not find container \"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30\": container with ID starting with 
9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30 not found: ID does not exist" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.048842 4913 scope.go:117] "RemoveContainer" containerID="3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.050977 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.061688 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:55:16 crc kubenswrapper[4913]: I0121 06:55:16.536882 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" path="/var/lib/kubelet/pods/d5725475-8b61-45a7-91e8-1d28e9042910/volumes" Jan 21 06:55:16 crc kubenswrapper[4913]: I0121 06:55:16.538418 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" path="/var/lib/kubelet/pods/f401b62e-8ebd-413e-a383-d9e74626c3d4/volumes" Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.501019 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.501503 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" podUID="0400ab56-f59a-4483-83d7-56db6e482138" containerName="manager" containerID="cri-o://8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f" gracePeriod=10 Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.781455 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.781950 4913 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack-operators/infra-operator-index-9rr22" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerName="registry-server" containerID="cri-o://2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f" gracePeriod=30
Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.823264 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"]
Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.827894 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"]
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.045703 4913 generic.go:334] "Generic (PLEG): container finished" podID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerID="2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f" exitCode=0
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.045780 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9rr22" event={"ID":"c40a34d4-0ef1-4aff-bc37-87c27e191d1f","Type":"ContainerDied","Data":"2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f"}
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.047827 4913 generic.go:334] "Generic (PLEG): container finished" podID="0400ab56-f59a-4483-83d7-56db6e482138" containerID="8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f" exitCode=0
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.047857 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" event={"ID":"0400ab56-f59a-4483-83d7-56db6e482138","Type":"ContainerDied","Data":"8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f"}
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.466867 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.572503 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") pod \"0400ab56-f59a-4483-83d7-56db6e482138\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") "
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.572568 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") pod \"0400ab56-f59a-4483-83d7-56db6e482138\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") "
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.572621 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") pod \"0400ab56-f59a-4483-83d7-56db6e482138\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") "
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.577764 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0400ab56-f59a-4483-83d7-56db6e482138" (UID: "0400ab56-f59a-4483-83d7-56db6e482138"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.578809 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg" (OuterVolumeSpecName: "kube-api-access-c2srg") pod "0400ab56-f59a-4483-83d7-56db6e482138" (UID: "0400ab56-f59a-4483-83d7-56db6e482138"). InnerVolumeSpecName "kube-api-access-c2srg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.581740 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "0400ab56-f59a-4483-83d7-56db6e482138" (UID: "0400ab56-f59a-4483-83d7-56db6e482138"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.676232 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.676515 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.676524 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.722448 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-9rr22"
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.777764 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvxq\" (UniqueName: \"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") pod \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\" (UID: \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\") "
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.782248 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq" (OuterVolumeSpecName: "kube-api-access-dwvxq") pod "c40a34d4-0ef1-4aff-bc37-87c27e191d1f" (UID: "c40a34d4-0ef1-4aff-bc37-87c27e191d1f"). InnerVolumeSpecName "kube-api-access-dwvxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.879906 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvxq\" (UniqueName: \"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.055545 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9rr22" event={"ID":"c40a34d4-0ef1-4aff-bc37-87c27e191d1f","Type":"ContainerDied","Data":"cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5"}
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.055615 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-9rr22"
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.055669 4913 scope.go:117] "RemoveContainer" containerID="2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f"
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.058121 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" event={"ID":"0400ab56-f59a-4483-83d7-56db6e482138","Type":"ContainerDied","Data":"34dad13f69ec8ebac45464947f13649925c0f206b2f50748da475e0fdda03067"}
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.058200 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.072933 4913 scope.go:117] "RemoveContainer" containerID="8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f"
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.093842 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"]
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.098164 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"]
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.107054 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"]
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.110386 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"]
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.414906 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"]
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.415116 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerName="manager" containerID="cri-o://c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63" gracePeriod=10
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.542459 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0400ab56-f59a-4483-83d7-56db6e482138" path="/var/lib/kubelet/pods/0400ab56-f59a-4483-83d7-56db6e482138/volumes"
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.543117 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" path="/var/lib/kubelet/pods/c40a34d4-0ef1-4aff-bc37-87c27e191d1f/volumes"
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.543761 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" path="/var/lib/kubelet/pods/f9a93fdf-fffb-4344-8ac8-81d8be41eea7/volumes"
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.714871 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"]
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.715056 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-jqn8q" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerName="registry-server" containerID="cri-o://08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118" gracePeriod=30
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.741884 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"]
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.757102 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"]
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.859973 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.893182 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") pod \"463ce3c4-98b5-41f1-bf36-f271228094e5\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") "
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.893279 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") pod \"463ce3c4-98b5-41f1-bf36-f271228094e5\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") "
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.893318 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") pod \"463ce3c4-98b5-41f1-bf36-f271228094e5\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") "
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.897409 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "463ce3c4-98b5-41f1-bf36-f271228094e5" (UID: "463ce3c4-98b5-41f1-bf36-f271228094e5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.898064 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "463ce3c4-98b5-41f1-bf36-f271228094e5" (UID: "463ce3c4-98b5-41f1-bf36-f271228094e5"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.910738 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28" (OuterVolumeSpecName: "kube-api-access-9nr28") pod "463ce3c4-98b5-41f1-bf36-f271228094e5" (UID: "463ce3c4-98b5-41f1-bf36-f271228094e5"). InnerVolumeSpecName "kube-api-access-9nr28". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.995212 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.995243 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.995255 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.067626 4913 generic.go:334] "Generic (PLEG): container finished" podID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerID="08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118" exitCode=0
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.067685 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jqn8q" event={"ID":"211a0853-fb6a-4002-98be-aa01c99eaa7d","Type":"ContainerDied","Data":"08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118"}
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069237 4913 generic.go:334] "Generic (PLEG): container finished" podID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerID="c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63" exitCode=0
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069283 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" event={"ID":"463ce3c4-98b5-41f1-bf36-f271228094e5","Type":"ContainerDied","Data":"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"}
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069298 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" event={"ID":"463ce3c4-98b5-41f1-bf36-f271228094e5","Type":"ContainerDied","Data":"cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4"}
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069317 4913 scope.go:117] "RemoveContainer" containerID="c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069310 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.104485 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jqn8q"
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.105415 4913 scope.go:117] "RemoveContainer" containerID="c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"
Jan 21 06:55:21 crc kubenswrapper[4913]: E0121 06:55:21.105766 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63\": container with ID starting with c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63 not found: ID does not exist" containerID="c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.105803 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"} err="failed to get container status \"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63\": rpc error: code = NotFound desc = could not find container \"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63\": container with ID starting with c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63 not found: ID does not exist"
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.116896 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"]
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.126435 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"]
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.197768 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") pod \"211a0853-fb6a-4002-98be-aa01c99eaa7d\" (UID: \"211a0853-fb6a-4002-98be-aa01c99eaa7d\") "
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.202511 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg" (OuterVolumeSpecName: "kube-api-access-4rhrg") pod "211a0853-fb6a-4002-98be-aa01c99eaa7d" (UID: "211a0853-fb6a-4002-98be-aa01c99eaa7d"). InnerVolumeSpecName "kube-api-access-4rhrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.299557 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") on node \"crc\" DevicePath \"\""
Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.081582 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jqn8q" event={"ID":"211a0853-fb6a-4002-98be-aa01c99eaa7d","Type":"ContainerDied","Data":"db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b"}
Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.081692 4913 scope.go:117] "RemoveContainer" containerID="08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118"
Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.081763 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jqn8q"
Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.121635 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"]
Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.124392 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"]
Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.539646 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" path="/var/lib/kubelet/pods/211a0853-fb6a-4002-98be-aa01c99eaa7d/volumes"
Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.540903 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" path="/var/lib/kubelet/pods/463ce3c4-98b5-41f1-bf36-f271228094e5/volumes"
Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.542126 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" path="/var/lib/kubelet/pods/980a7b2a-b9d1-4935-ac4c-9ac4a4730138/volumes"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.249550 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"]
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250756 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250773 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250784 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250791 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250805 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250814 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250822 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250828 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250839 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250845 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250855 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250862 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250871 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0400ab56-f59a-4483-83d7-56db6e482138" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250877 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0400ab56-f59a-4483-83d7-56db6e482138" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250887 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerName="operator"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250893 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerName="operator"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250901 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250907 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250915 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250921 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250929 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="mysql-bootstrap"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250935 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="mysql-bootstrap"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250944 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="setup-container"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250950 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="setup-container"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250957 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerName="keystone-api"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250965 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerName="keystone-api"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250974 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250982 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250996 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="mysql-bootstrap"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251004 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="mysql-bootstrap"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251012 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerName="memcached"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251018 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerName="memcached"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251026 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251033 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251040 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251046 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251055 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="mysql-bootstrap"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251061 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="mysql-bootstrap"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251075 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251082 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251091 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251097 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251199 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251212 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251221 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251227 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251236 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251244 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251253 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251262 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251269 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251275 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251286 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251292 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerName="registry-server"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251299 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251308 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0400ab56-f59a-4483-83d7-56db6e482138" containerName="manager"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251316 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerName="operator"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251322 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251330 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerName="memcached"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251338 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerName="keystone-api"
Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251442 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251457 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.252139 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ffklg/must-gather-ftkd9"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.255873 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ffklg"/"openshift-service-ca.crt"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.255877 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ffklg"/"default-dockercfg-wl5zr"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.258745 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ffklg"/"kube-root-ca.crt"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.308793 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"]
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.317615 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.317688 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.418535 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.418615 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.418964 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.442963 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.571978 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ffklg/must-gather-ftkd9"
Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.820489 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"]
Jan 21 06:55:36 crc kubenswrapper[4913]: W0121 06:55:36.834788 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0860e960_b47d_4cea_9c37_d28691c9a4d9.slice/crio-63ad5e0bcf2029c379f5702ad72f1ec3148f61b76e7031734c9ec666610b43a3 WatchSource:0}: Error finding container 63ad5e0bcf2029c379f5702ad72f1ec3148f61b76e7031734c9ec666610b43a3: Status 404 returned error can't find the container with id 63ad5e0bcf2029c379f5702ad72f1ec3148f61b76e7031734c9ec666610b43a3
Jan 21 06:55:37 crc kubenswrapper[4913]: I0121 06:55:37.217279 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ffklg/must-gather-ftkd9" event={"ID":"0860e960-b47d-4cea-9c37-d28691c9a4d9","Type":"ContainerStarted","Data":"63ad5e0bcf2029c379f5702ad72f1ec3148f61b76e7031734c9ec666610b43a3"}
Jan 21 06:55:43 crc kubenswrapper[4913]: I0121 06:55:43.259285 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ffklg/must-gather-ftkd9" event={"ID":"0860e960-b47d-4cea-9c37-d28691c9a4d9","Type":"ContainerStarted","Data":"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c"}
Jan 21 06:55:43 crc kubenswrapper[4913]: I0121 06:55:43.259890 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ffklg/must-gather-ftkd9" event={"ID":"0860e960-b47d-4cea-9c37-d28691c9a4d9","Type":"ContainerStarted","Data":"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60"}
Jan 21 06:55:43 crc kubenswrapper[4913]: I0121 06:55:43.294951 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ffklg/must-gather-ftkd9" podStartSLOduration=1.6540018330000001 podStartE2EDuration="7.294927069s" podCreationTimestamp="2026-01-21 06:55:36 +0000 UTC" firstStartedPulling="2026-01-21 06:55:36.836111874 +0000 UTC m=+1226.632471547" lastFinishedPulling="2026-01-21 06:55:42.47703711 +0000 UTC m=+1232.273396783" observedRunningTime="2026-01-21 06:55:43.278013332 +0000 UTC m=+1233.074373025" watchObservedRunningTime="2026-01-21 06:55:43.294927069 +0000 UTC m=+1233.091286752"
Jan 21 06:55:51 crc kubenswrapper[4913]: I0121 06:55:51.591043 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log"
Jan 21 06:55:51 crc kubenswrapper[4913]: I0121 06:55:51.605098 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log"
Jan 21 06:55:51 crc kubenswrapper[4913]: I0121 06:55:51.612089 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log"
Jan 21 06:55:51 crc kubenswrapper[4913]: I0121 06:55:51.633361 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log"
Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.125952 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log"
Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.137652 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log"
Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.143319 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log"
Jan 21 06:55:52 crc kubenswrapper[4913]: I0121
06:55:52.151646 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.159893 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.169104 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.176045 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.182475 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.213519 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.222536 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.377245 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.383074 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 06:55:56 crc kubenswrapper[4913]: E0121 
06:55:56.555077 4913 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.029s" Jan 21 06:56:00 crc kubenswrapper[4913]: I0121 06:56:00.101510 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5gzt8_6cdf7744-1629-46a4-b176-0fc75c149a95/control-plane-machine-set-operator/0.log" Jan 21 06:56:00 crc kubenswrapper[4913]: I0121 06:56:00.118808 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/kube-rbac-proxy/0.log" Jan 21 06:56:00 crc kubenswrapper[4913]: I0121 06:56:00.124523 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/machine-api-operator/0.log" Jan 21 06:56:08 crc kubenswrapper[4913]: I0121 06:56:08.319543 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:56:08 crc kubenswrapper[4913]: I0121 06:56:08.320271 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:56:10 crc kubenswrapper[4913]: I0121 06:56:10.750969 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 06:56:10 crc kubenswrapper[4913]: I0121 06:56:10.757227 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 06:56:10 crc kubenswrapper[4913]: I0121 06:56:10.769883 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 06:56:10 crc kubenswrapper[4913]: I0121 06:56:10.798583 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.230636 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.242869 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.246992 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.255640 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.261270 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.267762 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.274249 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.281955 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.306779 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.316332 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.426764 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.436420 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.592236 4913 scope.go:117] "RemoveContainer" containerID="e2875581fbd572dea4f4e410e08bce794cd12bf464303d41fbc9d66b0d7fcef6" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.619617 4913 scope.go:117] "RemoveContainer" containerID="7ea6d30dbbc206edb2f162346b01f1b70cea4ff52c09855b5688ceae555cd86f" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.640979 4913 scope.go:117] "RemoveContainer" containerID="ab7eba0415a79bbb3100d97d9966a99002b3c45fe402ca2d92dfeca4328093d3" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.661664 4913 scope.go:117] "RemoveContainer" containerID="37009c48c11ee62bd23237579f9cc9c8d427c5cbaddb700f28802586ebc40376" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 
06:56:12.683430 4913 scope.go:117] "RemoveContainer" containerID="436316e77fad673adee43600b81c8e8cb659f723e40fde5ac692b7f2f5e51c80" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.703213 4913 scope.go:117] "RemoveContainer" containerID="e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.720882 4913 scope.go:117] "RemoveContainer" containerID="660368d7d30a6dcd15b89683468c16579bae9e6ba5e62cde1ef85f9aba8de9d8" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.743258 4913 scope.go:117] "RemoveContainer" containerID="78cb1589a905d5dddd50883207b015f6195746217f5f18c55b7dfc51421182fe" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.760019 4913 scope.go:117] "RemoveContainer" containerID="ad4e9a5725df8589b88531ce1b43bf4f6ce13685f4a05de5d6349c46dcad6a4a" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.775076 4913 scope.go:117] "RemoveContainer" containerID="d9191512905a50023a8bd3340913a6390b0e97c743493bde552499fe3bccd78f" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.208503 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/extract/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.221020 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/util/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.229283 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/pull/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.500885 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/registry-server/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.506050 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/extract-utilities/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.513737 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/extract-content/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.744370 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/registry-server/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.749074 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/extract-utilities/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.754995 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/extract-content/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.768573 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mmmzm_9850b956-f0a1-4e29-b5c2-703b0aa7b697/marketplace-operator/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.823215 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/registry-server/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.828033 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/extract-utilities/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.837008 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/extract-content/0.log" Jan 21 06:56:22 crc kubenswrapper[4913]: I0121 06:56:22.108175 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/registry-server/0.log" Jan 21 06:56:22 crc kubenswrapper[4913]: I0121 06:56:22.112820 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/extract-utilities/0.log" Jan 21 06:56:22 crc kubenswrapper[4913]: I0121 06:56:22.120952 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/extract-content/0.log" Jan 21 06:56:38 crc kubenswrapper[4913]: I0121 06:56:38.319347 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:56:38 crc kubenswrapper[4913]: I0121 06:56:38.320006 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:56:41 crc kubenswrapper[4913]: I0121 06:56:41.599794 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 06:56:41 crc kubenswrapper[4913]: I0121 06:56:41.607140 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 06:56:41 crc kubenswrapper[4913]: I0121 06:56:41.625333 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 06:56:41 crc kubenswrapper[4913]: I0121 06:56:41.642048 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.113212 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.122624 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.127474 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.136085 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.143376 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.149700 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.156182 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.163907 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.193237 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.201789 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.326305 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.334716 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.993648 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5gzt8_6cdf7744-1629-46a4-b176-0fc75c149a95/control-plane-machine-set-operator/0.log" Jan 21 06:56:43 crc kubenswrapper[4913]: I0121 06:56:43.011348 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/kube-rbac-proxy/0.log" Jan 21 06:56:43 crc 
kubenswrapper[4913]: I0121 06:56:43.020561 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/machine-api-operator/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.035319 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/kube-multus-additional-cni-plugins/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.042084 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/egress-router-binary-copy/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.048862 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/cni-plugins/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.055807 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/bond-cni-plugin/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.063884 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/routeoverride-cni/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.072308 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/whereabouts-cni-bincopy/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.080964 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/whereabouts-cni/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 
06:56:44.097354 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-l6rtq_e70bbe19-3e5b-4629-b9bf-3c6fc8072836/multus-admission-controller/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.102566 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-l6rtq_e70bbe19-3e5b-4629-b9bf-3c6fc8072836/kube-rbac-proxy/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.157186 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/3.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.160284 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.192699 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-wfcsc_60ed8982-ee20-4330-861f-61509c39bbe7/network-metrics-daemon/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.198931 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-wfcsc_60ed8982-ee20-4330-861f-61509c39bbe7/kube-rbac-proxy/0.log" Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.318730 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.319257 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.319311 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.320065 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.320156 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800" gracePeriod=600 Jan 21 06:57:09 crc kubenswrapper[4913]: I0121 06:57:09.057669 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800" exitCode=0 Jan 21 06:57:09 crc kubenswrapper[4913]: I0121 06:57:09.057749 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800"} Jan 21 06:57:09 crc kubenswrapper[4913]: I0121 06:57:09.058114 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" 
event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"} Jan 21 06:57:09 crc kubenswrapper[4913]: I0121 06:57:09.058140 4913 scope.go:117] "RemoveContainer" containerID="bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6" Jan 21 06:57:12 crc kubenswrapper[4913]: I0121 06:57:12.903841 4913 scope.go:117] "RemoveContainer" containerID="f64e19c7af4171a78023ca3711a7eec83f0f3b9547ff3c69e634b90c2c0582db" Jan 21 06:57:12 crc kubenswrapper[4913]: I0121 06:57:12.936812 4913 scope.go:117] "RemoveContainer" containerID="461bda799565e5924857f0b3e4f758b75acec0c9a9a9ac5312facf66ecd33abe" Jan 21 06:57:12 crc kubenswrapper[4913]: I0121 06:57:12.978050 4913 scope.go:117] "RemoveContainer" containerID="60ac37c77e23483afc0614ffbcd77f3112a7195bb5009179ec07fc76cbf42d75" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.037552 4913 scope.go:117] "RemoveContainer" containerID="65cd0cea93dd7b6363d627c73b949bbd4db992664864de3f7764764d0faf09c3" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.067053 4913 scope.go:117] "RemoveContainer" containerID="a22a190e1ff258d8fa0dac8b8437c0615341f76e3c06c6668ce4c7053be4e2a1" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.096046 4913 scope.go:117] "RemoveContainer" containerID="92a35170c3a228e725dc4577bc820bf64539f262c32a337b31f25d4c32fe9af7" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.118163 4913 scope.go:117] "RemoveContainer" containerID="b5c65ed731440220892793dc0f5f5c1250a99d03d67a71b6685779fcad076adc" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.137385 4913 scope.go:117] "RemoveContainer" containerID="e3dd37a383fdecec61ce46d8e11f1c1ebc9bad6811b9f26d39315ce8ccbe7680" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.156938 4913 scope.go:117] "RemoveContainer" containerID="57ae751f0ac8e317da709793a9908e104d2805bb250e026f326c599fd971bccb" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 
06:58:13.191086 4913 scope.go:117] "RemoveContainer" containerID="659f77de9cd51e636b4491f066c76f374e6bc4986d8367fb979a0e570225f47e"
Jan 21 06:59:08 crc kubenswrapper[4913]: I0121 06:59:08.319124 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:59:08 crc kubenswrapper[4913]: I0121 06:59:08.320032 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:59:13 crc kubenswrapper[4913]: I0121 06:59:13.269458 4913 scope.go:117] "RemoveContainer" containerID="9e53798c55bdb9c3197d82fb273c6d283738061cb335b3befb8f5bcffda2529a"
Jan 21 06:59:13 crc kubenswrapper[4913]: I0121 06:59:13.330662 4913 scope.go:117] "RemoveContainer" containerID="bd6ec813f35e1b00979db706282fff37dec10f9429ac91969695adf37e90c613"
Jan 21 06:59:13 crc kubenswrapper[4913]: I0121 06:59:13.347400 4913 scope.go:117] "RemoveContainer" containerID="adff856e206c29a325cd94fb3c5dd663d301aeb9797e99aeea00e843d581cf79"
Jan 21 06:59:38 crc kubenswrapper[4913]: I0121 06:59:38.318480 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:59:38 crc kubenswrapper[4913]: I0121 06:59:38.318967 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.159909 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"]
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.162470 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.164218 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.168438 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.180210 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"]
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.220669 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.220709 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.220769 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.321817 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.321896 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.321924 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.326913 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.335582 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.338797 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.511374 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.730277 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"]
Jan 21 07:00:00 crc kubenswrapper[4913]: W0121 07:00:00.741270 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a66fbcc_cc17_4d20_bb3a_36d3f9ad2a90.slice/crio-3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2 WatchSource:0}: Error finding container 3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2: Status 404 returned error can't find the container with id 3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2
Jan 21 07:00:01 crc kubenswrapper[4913]: I0121 07:00:01.370842 4913 generic.go:334] "Generic (PLEG): container finished" podID="9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" containerID="6ff2eb7824bb9caeb6a07d4f757d7b8e5c74d440d7ed3872dc95969b6f7c97b2" exitCode=0
Jan 21 07:00:01 crc kubenswrapper[4913]: I0121 07:00:01.370978 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" event={"ID":"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90","Type":"ContainerDied","Data":"6ff2eb7824bb9caeb6a07d4f757d7b8e5c74d440d7ed3872dc95969b6f7c97b2"}
Jan 21 07:00:01 crc kubenswrapper[4913]: I0121 07:00:01.371174 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" event={"ID":"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90","Type":"ContainerStarted","Data":"3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2"}
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.656665 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.856800 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") pod \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") "
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.856893 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") pod \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") "
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.856977 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") pod \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") "
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.858136 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume" (OuterVolumeSpecName: "config-volume") pod "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" (UID: "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.865681 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4" (OuterVolumeSpecName: "kube-api-access-pxpm4") pod "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" (UID: "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90"). InnerVolumeSpecName "kube-api-access-pxpm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.866031 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" (UID: "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.958344 4913 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.958386 4913 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.958404 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") on node \"crc\" DevicePath \"\""
Jan 21 07:00:03 crc kubenswrapper[4913]: I0121 07:00:03.388956 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" event={"ID":"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90","Type":"ContainerDied","Data":"3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2"}
Jan 21 07:00:03 crc kubenswrapper[4913]: I0121 07:00:03.389012 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2"
Jan 21 07:00:03 crc kubenswrapper[4913]: I0121 07:00:03.389066 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"
Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.319713 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.319822 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.319892 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg"
Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.320752 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.320882 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" gracePeriod=600
Jan 21 07:00:08 crc kubenswrapper[4913]: E0121 07:00:08.525237 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:00:09 crc kubenswrapper[4913]: I0121 07:00:09.462779 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" exitCode=0
Jan 21 07:00:09 crc kubenswrapper[4913]: I0121 07:00:09.462830 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"}
Jan 21 07:00:09 crc kubenswrapper[4913]: I0121 07:00:09.462989 4913 scope.go:117] "RemoveContainer" containerID="a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800"
Jan 21 07:00:09 crc kubenswrapper[4913]: I0121 07:00:09.464115 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:00:09 crc kubenswrapper[4913]: E0121 07:00:09.464895 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:00:24 crc kubenswrapper[4913]: I0121 07:00:24.526799 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:00:24 crc kubenswrapper[4913]: E0121 07:00:24.528379 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:00:36 crc kubenswrapper[4913]: I0121 07:00:36.527438 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:00:36 crc kubenswrapper[4913]: E0121 07:00:36.528397 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.043537 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9757r"]
Jan 21 07:00:38 crc kubenswrapper[4913]: E0121 07:00:38.043821 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" containerName="collect-profiles"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.043837 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" containerName="collect-profiles"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.044206 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" containerName="collect-profiles"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.047689 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.059247 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"]
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.206599 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.206699 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.206778 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.307831 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.307938 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.308004 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.308354 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.308685 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.336754 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.375290 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.606007 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"]
Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.700197 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerStarted","Data":"44ae31c06ae6bf3ba2a16e82bee60b8ea3b82de96c830a79f05c92b64bfbd570"}
Jan 21 07:00:39 crc kubenswrapper[4913]: I0121 07:00:39.711840 4913 generic.go:334] "Generic (PLEG): container finished" podID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerID="2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761" exitCode=0
Jan 21 07:00:39 crc kubenswrapper[4913]: I0121 07:00:39.711898 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerDied","Data":"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761"}
Jan 21 07:00:39 crc kubenswrapper[4913]: I0121 07:00:39.715313 4913 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 07:00:40 crc kubenswrapper[4913]: I0121 07:00:40.719008 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerStarted","Data":"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18"}
Jan 21 07:00:41 crc kubenswrapper[4913]: I0121 07:00:41.726371 4913 generic.go:334] "Generic (PLEG): container finished" podID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerID="878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18" exitCode=0
Jan 21 07:00:41 crc kubenswrapper[4913]: I0121 07:00:41.726476 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerDied","Data":"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18"}
Jan 21 07:00:42 crc kubenswrapper[4913]: I0121 07:00:42.737444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerStarted","Data":"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda"}
Jan 21 07:00:42 crc kubenswrapper[4913]: I0121 07:00:42.769066 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9757r" podStartSLOduration=2.197128654 podStartE2EDuration="4.769036847s" podCreationTimestamp="2026-01-21 07:00:38 +0000 UTC" firstStartedPulling="2026-01-21 07:00:39.714920686 +0000 UTC m=+1529.511280389" lastFinishedPulling="2026-01-21 07:00:42.286828879 +0000 UTC m=+1532.083188582" observedRunningTime="2026-01-21 07:00:42.763733763 +0000 UTC m=+1532.560093456" watchObservedRunningTime="2026-01-21 07:00:42.769036847 +0000 UTC m=+1532.565396560"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.220570 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-smpb9"]
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.223140 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.227222 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smpb9"]
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.386555 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.386670 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.386716 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488132 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488223 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488265 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488711 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488743 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.512565 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.548614 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.819659 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smpb9"]
Jan 21 07:00:44 crc kubenswrapper[4913]: W0121 07:00:44.823479 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf61318_ded2_4361_a2d3_cec7aeb2d44e.slice/crio-9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309 WatchSource:0}: Error finding container 9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309: Status 404 returned error can't find the container with id 9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309
Jan 21 07:00:45 crc kubenswrapper[4913]: I0121 07:00:45.764806 4913 generic.go:334] "Generic (PLEG): container finished" podID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerID="48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce" exitCode=0
Jan 21 07:00:45 crc kubenswrapper[4913]: I0121 07:00:45.764908 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerDied","Data":"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce"}
Jan 21 07:00:45 crc kubenswrapper[4913]: I0121 07:00:45.765191 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerStarted","Data":"9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309"}
Jan 21 07:00:47 crc kubenswrapper[4913]: I0121 07:00:47.526429 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:00:47 crc kubenswrapper[4913]: E0121 07:00:47.526831 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:00:48 crc kubenswrapper[4913]: I0121 07:00:48.376304 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:48 crc kubenswrapper[4913]: I0121 07:00:48.376620 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9757r"
Jan 21 07:00:48 crc kubenswrapper[4913]: I0121 07:00:48.782435 4913 generic.go:334] "Generic (PLEG): container finished" podID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerID="9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148" exitCode=0
Jan 21 07:00:48 crc kubenswrapper[4913]: I0121 07:00:48.782490 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerDied","Data":"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148"}
Jan 21 07:00:49 crc kubenswrapper[4913]: I0121 07:00:49.440641 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9757r" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" probeResult="failure" output=<
Jan 21 07:00:49 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s
Jan 21 07:00:49 crc kubenswrapper[4913]: >
Jan 21 07:00:50 crc kubenswrapper[4913]: I0121 07:00:50.798176 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerStarted","Data":"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840"}
Jan 21 07:00:50 crc kubenswrapper[4913]: I0121 07:00:50.818923 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-smpb9" podStartSLOduration=2.786692871 podStartE2EDuration="6.818903745s" podCreationTimestamp="2026-01-21 07:00:44 +0000 UTC" firstStartedPulling="2026-01-21 07:00:45.767322477 +0000 UTC m=+1535.563682150" lastFinishedPulling="2026-01-21 07:00:49.799533321 +0000 UTC m=+1539.595893024" observedRunningTime="2026-01-21 07:00:50.814196996 +0000 UTC m=+1540.610556659" watchObservedRunningTime="2026-01-21 07:00:50.818903745 +0000 UTC m=+1540.615263418"
Jan 21 07:00:53 crc kubenswrapper[4913]: I0121 07:00:53.824689 4913 generic.go:334] "Generic (PLEG): container finished" podID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" exitCode=0
Jan 21 07:00:53 crc kubenswrapper[4913]: I0121 07:00:53.824785 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ffklg/must-gather-ftkd9" event={"ID":"0860e960-b47d-4cea-9c37-d28691c9a4d9","Type":"ContainerDied","Data":"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60"}
Jan 21 07:00:53 crc kubenswrapper[4913]: I0121 07:00:53.825750 4913 scope.go:117] "RemoveContainer" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60"
Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.549519 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.549573 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.562747 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ffklg_must-gather-ftkd9_0860e960-b47d-4cea-9c37-d28691c9a4d9/gather/0.log"
Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.608934 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.898825 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.956308 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smpb9"]
Jan 21 07:00:56 crc kubenswrapper[4913]: I0121 07:00:56.844894 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-smpb9" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="registry-server" containerID="cri-o://9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" gracePeriod=2
Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.202391 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smpb9"
Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.362195 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") pod \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") "
Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.362334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") pod \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") "
Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.362374 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") pod \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") "
Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.363491 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities" (OuterVolumeSpecName: "utilities") pod "3bf61318-ded2-4361-a2d3-cec7aeb2d44e" (UID: "3bf61318-ded2-4361-a2d3-cec7aeb2d44e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.369800 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql" (OuterVolumeSpecName: "kube-api-access-wfjql") pod "3bf61318-ded2-4361-a2d3-cec7aeb2d44e" (UID: "3bf61318-ded2-4361-a2d3-cec7aeb2d44e"). InnerVolumeSpecName "kube-api-access-wfjql".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.416504 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf61318-ded2-4361-a2d3-cec7aeb2d44e" (UID: "3bf61318-ded2-4361-a2d3-cec7aeb2d44e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.464207 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.464273 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.464298 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852134 4913 generic.go:334] "Generic (PLEG): container finished" podID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerID="9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" exitCode=0 Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852559 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerDied","Data":"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840"} Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852626 4913 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerDied","Data":"9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309"} Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852649 4913 scope.go:117] "RemoveContainer" containerID="9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852790 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.884979 4913 scope.go:117] "RemoveContainer" containerID="9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.911689 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smpb9"] Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.917063 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-smpb9"] Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.936889 4913 scope.go:117] "RemoveContainer" containerID="48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.965875 4913 scope.go:117] "RemoveContainer" containerID="9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" Jan 21 07:00:57 crc kubenswrapper[4913]: E0121 07:00:57.966440 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840\": container with ID starting with 9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840 not found: ID does not exist" containerID="9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 
07:00:57.966519 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840"} err="failed to get container status \"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840\": rpc error: code = NotFound desc = could not find container \"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840\": container with ID starting with 9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840 not found: ID does not exist" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.966557 4913 scope.go:117] "RemoveContainer" containerID="9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148" Jan 21 07:00:57 crc kubenswrapper[4913]: E0121 07:00:57.966988 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148\": container with ID starting with 9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148 not found: ID does not exist" containerID="9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.967020 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148"} err="failed to get container status \"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148\": rpc error: code = NotFound desc = could not find container \"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148\": container with ID starting with 9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148 not found: ID does not exist" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.967042 4913 scope.go:117] "RemoveContainer" containerID="48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce" Jan 21 07:00:57 crc 
kubenswrapper[4913]: E0121 07:00:57.967912 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce\": container with ID starting with 48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce not found: ID does not exist" containerID="48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.967957 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce"} err="failed to get container status \"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce\": rpc error: code = NotFound desc = could not find container \"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce\": container with ID starting with 48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce not found: ID does not exist" Jan 21 07:00:58 crc kubenswrapper[4913]: I0121 07:00:58.458713 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:58 crc kubenswrapper[4913]: I0121 07:00:58.512663 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:58 crc kubenswrapper[4913]: I0121 07:00:58.526738 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:00:58 crc kubenswrapper[4913]: E0121 07:00:58.527012 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:00:58 crc kubenswrapper[4913]: I0121 07:00:58.537295 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" path="/var/lib/kubelet/pods/3bf61318-ded2-4361-a2d3-cec7aeb2d44e/volumes" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.257501 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.260083 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9757r" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" containerID="cri-o://ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" gracePeriod=2 Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.701032 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.812963 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") pod \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.813308 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") pod \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.813362 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") pod \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.814627 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities" (OuterVolumeSpecName: "utilities") pod "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" (UID: "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.821123 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl" (OuterVolumeSpecName: "kube-api-access-5bxcl") pod "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" (UID: "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10"). InnerVolumeSpecName "kube-api-access-5bxcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.877702 4913 generic.go:334] "Generic (PLEG): container finished" podID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerID="ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" exitCode=0 Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.877917 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerDied","Data":"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda"} Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.878011 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerDied","Data":"44ae31c06ae6bf3ba2a16e82bee60b8ea3b82de96c830a79f05c92b64bfbd570"} Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.878082 4913 scope.go:117] "RemoveContainer" containerID="ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.878156 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.900606 4913 scope.go:117] "RemoveContainer" containerID="878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.915466 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.915831 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.921274 4913 scope.go:117] "RemoveContainer" containerID="2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.930439 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" (UID: "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.943021 4913 scope.go:117] "RemoveContainer" containerID="ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" Jan 21 07:01:00 crc kubenswrapper[4913]: E0121 07:01:00.944007 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda\": container with ID starting with ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda not found: ID does not exist" containerID="ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.944050 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda"} err="failed to get container status \"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda\": rpc error: code = NotFound desc = could not find container \"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda\": container with ID starting with ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda not found: ID does not exist" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.944075 4913 scope.go:117] "RemoveContainer" containerID="878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18" Jan 21 07:01:00 crc kubenswrapper[4913]: E0121 07:01:00.944510 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18\": container with ID starting with 878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18 not found: ID does not exist" containerID="878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.944547 
4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18"} err="failed to get container status \"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18\": rpc error: code = NotFound desc = could not find container \"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18\": container with ID starting with 878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18 not found: ID does not exist" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.944572 4913 scope.go:117] "RemoveContainer" containerID="2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761" Jan 21 07:01:00 crc kubenswrapper[4913]: E0121 07:01:00.945566 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761\": container with ID starting with 2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761 not found: ID does not exist" containerID="2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.945620 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761"} err="failed to get container status \"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761\": rpc error: code = NotFound desc = could not find container \"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761\": container with ID starting with 2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761 not found: ID does not exist" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.022979 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"] Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 
07:01:01.023434 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ffklg/must-gather-ftkd9" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="copy" containerID="cri-o://db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" gracePeriod=2 Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.024098 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.027669 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"] Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.233920 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.238302 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.316295 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ffklg_must-gather-ftkd9_0860e960-b47d-4cea-9c37-d28691c9a4d9/copy/0.log" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.316803 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.327852 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") pod \"0860e960-b47d-4cea-9c37-d28691c9a4d9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.327970 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") pod \"0860e960-b47d-4cea-9c37-d28691c9a4d9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.332809 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r" (OuterVolumeSpecName: "kube-api-access-tc58r") pod "0860e960-b47d-4cea-9c37-d28691c9a4d9" (UID: "0860e960-b47d-4cea-9c37-d28691c9a4d9"). InnerVolumeSpecName "kube-api-access-tc58r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.404622 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0860e960-b47d-4cea-9c37-d28691c9a4d9" (UID: "0860e960-b47d-4cea-9c37-d28691c9a4d9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.429163 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.429230 4913 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.884753 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ffklg_must-gather-ftkd9_0860e960-b47d-4cea-9c37-d28691c9a4d9/copy/0.log" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.885466 4913 generic.go:334] "Generic (PLEG): container finished" podID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerID="db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" exitCode=143 Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.885497 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.885522 4913 scope.go:117] "RemoveContainer" containerID="db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.903650 4913 scope.go:117] "RemoveContainer" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.937499 4913 scope.go:117] "RemoveContainer" containerID="db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" Jan 21 07:01:01 crc kubenswrapper[4913]: E0121 07:01:01.937969 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c\": container with ID starting with db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c not found: ID does not exist" containerID="db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.938012 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c"} err="failed to get container status \"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c\": rpc error: code = NotFound desc = could not find container \"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c\": container with ID starting with db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c not found: ID does not exist" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.938036 4913 scope.go:117] "RemoveContainer" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" Jan 21 07:01:01 crc kubenswrapper[4913]: E0121 07:01:01.938356 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60\": container with ID starting with 7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60 not found: ID does not exist" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.938422 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60"} err="failed to get container status \"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60\": rpc error: code = NotFound desc = could not find container \"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60\": container with ID starting with 7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60 not found: ID does not exist" Jan 21 07:01:02 crc kubenswrapper[4913]: I0121 07:01:02.538727 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" path="/var/lib/kubelet/pods/0860e960-b47d-4cea-9c37-d28691c9a4d9/volumes" Jan 21 07:01:02 crc kubenswrapper[4913]: I0121 07:01:02.541165 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" path="/var/lib/kubelet/pods/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10/volumes" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.105531 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106429 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="copy" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106447 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="copy" Jan 21 07:01:11 crc kubenswrapper[4913]: 
E0121 07:01:11.106460 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106469 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106487 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="extract-utilities" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106496 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="extract-utilities" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106511 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="extract-content" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106519 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="extract-content" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106531 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="extract-content" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106538 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="extract-content" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106553 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106561 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 
07:01:11.106576 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="extract-utilities" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106584 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="extract-utilities" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106619 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="gather" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106627 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="gather" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106745 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="gather" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106757 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106773 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106783 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="copy" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.107706 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.130392 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.276369 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.276453 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.276520 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.377715 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.377816 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.377871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.378562 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.378560 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.399425 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.437001 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.840990 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.954804 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerStarted","Data":"c197ea729d234b260070af02e3d362e9f19e17a47868a2c5004ab193639c96af"} Jan 21 07:01:12 crc kubenswrapper[4913]: I0121 07:01:12.527238 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:01:12 crc kubenswrapper[4913]: E0121 07:01:12.527703 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:01:12 crc kubenswrapper[4913]: I0121 07:01:12.964093 4913 generic.go:334] "Generic (PLEG): container finished" podID="32ecbe88-f107-4b8b-b311-046170e29680" containerID="9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09" exitCode=0 Jan 21 07:01:12 crc kubenswrapper[4913]: I0121 07:01:12.964170 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerDied","Data":"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09"} Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.431041 4913 scope.go:117] "RemoveContainer" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" 
Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.475015 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.511210 4913 scope.go:117] "RemoveContainer" containerID="2fbf80cb1c97a1b30187af9913da5a773063cae1392a250c64b697455ea04db1" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.572853 4913 scope.go:117] "RemoveContainer" containerID="432a97f97a1e748db15a8c859dcaa7de8838a131f61c83acbb060114eb9ecddf" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.597641 4913 scope.go:117] "RemoveContainer" containerID="9eca8ae0460adba99832950743728508ef374a3fcd006d3227af45af81c4c272" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.621990 4913 scope.go:117] "RemoveContainer" containerID="e1ea5f6140029683c8a222d0bbd83ba86ea14378a3c9af4d524c0c94971d8e7a" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.645281 4913 scope.go:117] "RemoveContainer" containerID="f6cac025b126b4c0c411e321cbd1813b091d8ec15b54ab9a538e93d26406b363" Jan 21 07:01:13 crc kubenswrapper[4913]: E0121 07:01:13.947813 4913 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ecbe88_f107_4b8b_b311_046170e29680.slice/crio-conmon-a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8.scope\": RecentStats: unable to find data in memory cache]" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.973344 4913 generic.go:334] "Generic (PLEG): container finished" podID="32ecbe88-f107-4b8b-b311-046170e29680" containerID="a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8" exitCode=0 Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.973393 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" 
event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerDied","Data":"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8"} Jan 21 07:01:14 crc kubenswrapper[4913]: I0121 07:01:14.981731 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerStarted","Data":"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476"} Jan 21 07:01:15 crc kubenswrapper[4913]: I0121 07:01:15.002346 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwzz9" podStartSLOduration=2.580253192 podStartE2EDuration="4.002331805s" podCreationTimestamp="2026-01-21 07:01:11 +0000 UTC" firstStartedPulling="2026-01-21 07:01:12.966121024 +0000 UTC m=+1562.762480727" lastFinishedPulling="2026-01-21 07:01:14.388199667 +0000 UTC m=+1564.184559340" observedRunningTime="2026-01-21 07:01:14.999868072 +0000 UTC m=+1564.796227765" watchObservedRunningTime="2026-01-21 07:01:15.002331805 +0000 UTC m=+1564.798691478" Jan 21 07:01:21 crc kubenswrapper[4913]: I0121 07:01:21.437942 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:21 crc kubenswrapper[4913]: I0121 07:01:21.438355 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:21 crc kubenswrapper[4913]: I0121 07:01:21.505253 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:22 crc kubenswrapper[4913]: I0121 07:01:22.091754 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:22 crc kubenswrapper[4913]: I0121 07:01:22.149982 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:24 crc kubenswrapper[4913]: I0121 07:01:24.039940 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwzz9" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="registry-server" containerID="cri-o://a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" gracePeriod=2 Jan 21 07:01:24 crc kubenswrapper[4913]: I0121 07:01:24.526792 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:01:24 crc kubenswrapper[4913]: E0121 07:01:24.527118 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.020609 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.049907 4913 generic.go:334] "Generic (PLEG): container finished" podID="32ecbe88-f107-4b8b-b311-046170e29680" containerID="a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" exitCode=0 Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.049954 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerDied","Data":"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476"} Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.050339 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.051159 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerDied","Data":"c197ea729d234b260070af02e3d362e9f19e17a47868a2c5004ab193639c96af"} Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.051188 4913 scope.go:117] "RemoveContainer" containerID="a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.073048 4913 scope.go:117] "RemoveContainer" containerID="a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.092905 4913 scope.go:117] "RemoveContainer" containerID="9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.114930 4913 scope.go:117] "RemoveContainer" containerID="a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" Jan 21 07:01:25 crc kubenswrapper[4913]: E0121 07:01:25.115811 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476\": container with ID starting with a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476 not found: ID does not exist" containerID="a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.115889 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476"} err="failed to get container status \"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476\": rpc error: code = NotFound desc = could not find container 
\"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476\": container with ID starting with a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476 not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.115924 4913 scope.go:117] "RemoveContainer" containerID="a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8" Jan 21 07:01:25 crc kubenswrapper[4913]: E0121 07:01:25.116722 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8\": container with ID starting with a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8 not found: ID does not exist" containerID="a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.116754 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8"} err="failed to get container status \"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8\": rpc error: code = NotFound desc = could not find container \"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8\": container with ID starting with a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8 not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.116772 4913 scope.go:117] "RemoveContainer" containerID="9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09" Jan 21 07:01:25 crc kubenswrapper[4913]: E0121 07:01:25.117259 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09\": container with ID starting with 9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09 not found: ID does not exist" 
containerID="9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.117309 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09"} err="failed to get container status \"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09\": rpc error: code = NotFound desc = could not find container \"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09\": container with ID starting with 9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09 not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.165091 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") pod \"32ecbe88-f107-4b8b-b311-046170e29680\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.165316 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") pod \"32ecbe88-f107-4b8b-b311-046170e29680\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.165359 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") pod \"32ecbe88-f107-4b8b-b311-046170e29680\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.166902 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities" (OuterVolumeSpecName: "utilities") pod 
"32ecbe88-f107-4b8b-b311-046170e29680" (UID: "32ecbe88-f107-4b8b-b311-046170e29680"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.172599 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh" (OuterVolumeSpecName: "kube-api-access-xjqlh") pod "32ecbe88-f107-4b8b-b311-046170e29680" (UID: "32ecbe88-f107-4b8b-b311-046170e29680"). InnerVolumeSpecName "kube-api-access-xjqlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.203930 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32ecbe88-f107-4b8b-b311-046170e29680" (UID: "32ecbe88-f107-4b8b-b311-046170e29680"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.267210 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.267255 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.267274 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.386212 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.393295 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:26 crc kubenswrapper[4913]: I0121 07:01:26.537224 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ecbe88-f107-4b8b-b311-046170e29680" path="/var/lib/kubelet/pods/32ecbe88-f107-4b8b-b311-046170e29680/volumes" Jan 21 07:01:36 crc kubenswrapper[4913]: I0121 07:01:36.526970 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:01:36 crc kubenswrapper[4913]: E0121 07:01:36.528027 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:01:48 crc kubenswrapper[4913]: I0121 07:01:48.526421 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:01:48 crc kubenswrapper[4913]: E0121 07:01:48.527437 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.841488 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5jnbm/must-gather-96p7d"] Jan 21 07:01:58 crc kubenswrapper[4913]: E0121 07:01:58.842465 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="extract-utilities" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.842487 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="extract-utilities" Jan 21 07:01:58 crc kubenswrapper[4913]: E0121 07:01:58.842509 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="registry-server" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.842519 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="registry-server" Jan 21 07:01:58 crc kubenswrapper[4913]: E0121 07:01:58.842543 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="extract-content" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.842554 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="extract-content" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.842730 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="registry-server" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.843531 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.846185 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5jnbm"/"kube-root-ca.crt" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.847059 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5jnbm"/"openshift-service-ca.crt" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.865580 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.865872 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.904161 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-must-gather-5jnbm/must-gather-96p7d"] Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.966664 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.966757 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.967131 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.986283 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:59 crc kubenswrapper[4913]: I0121 07:01:59.164457 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:59 crc kubenswrapper[4913]: I0121 07:01:59.435224 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5jnbm/must-gather-96p7d"] Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.317352 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5jnbm/must-gather-96p7d" event={"ID":"a42cc95b-6480-4433-9ad8-112d8e53faff","Type":"ContainerStarted","Data":"89b922a579843f94dde47e41360be3db7362db1133978e47557ded141cd5692e"} Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.317681 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5jnbm/must-gather-96p7d" event={"ID":"a42cc95b-6480-4433-9ad8-112d8e53faff","Type":"ContainerStarted","Data":"7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2"} Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.317691 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5jnbm/must-gather-96p7d" event={"ID":"a42cc95b-6480-4433-9ad8-112d8e53faff","Type":"ContainerStarted","Data":"7f29b4e5874bc3b2f2cda0ec1c031c1c12eaaed7301c933bd23dc7304cbe535f"} Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.342533 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5jnbm/must-gather-96p7d" podStartSLOduration=2.342509416 podStartE2EDuration="2.342509416s" podCreationTimestamp="2026-01-21 07:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:02:00.341342176 +0000 UTC m=+1610.137701849" watchObservedRunningTime="2026-01-21 07:02:00.342509416 +0000 UTC m=+1610.138869109" Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.531208 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:00 crc 
kubenswrapper[4913]: E0121 07:02:00.531422 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:08 crc kubenswrapper[4913]: I0121 07:02:08.700020 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 07:02:08 crc kubenswrapper[4913]: I0121 07:02:08.707527 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 07:02:08 crc kubenswrapper[4913]: I0121 07:02:08.721105 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 07:02:08 crc kubenswrapper[4913]: I0121 07:02:08.755647 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.583373 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.595667 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.604292 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.615369 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.626820 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.634320 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.643481 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.650943 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.683939 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.697208 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.868809 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.875158 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 07:02:13 crc kubenswrapper[4913]: I0121 07:02:13.526176 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:13 crc kubenswrapper[4913]: E0121 07:02:13.526871 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:16 crc kubenswrapper[4913]: I0121 07:02:16.461549 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5gzt8_6cdf7744-1629-46a4-b176-0fc75c149a95/control-plane-machine-set-operator/0.log" Jan 21 07:02:16 crc kubenswrapper[4913]: I0121 07:02:16.495337 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/kube-rbac-proxy/0.log" Jan 21 07:02:16 crc kubenswrapper[4913]: I0121 07:02:16.508223 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/machine-api-operator/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.094439 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.101573 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.115221 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.141465 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.526338 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:28 crc kubenswrapper[4913]: E0121 07:02:28.526609 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.584084 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.594258 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.597766 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.604677 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.615763 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.625161 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.632869 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.638306 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.669899 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.679988 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.793767 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.808402 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 07:02:38 crc kubenswrapper[4913]: I0121 07:02:38.895413 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/extract/0.log" Jan 21 07:02:38 crc kubenswrapper[4913]: I0121 07:02:38.904239 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/util/0.log" Jan 21 07:02:38 crc kubenswrapper[4913]: I0121 07:02:38.914564 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/pull/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.219167 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/registry-server/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.223416 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/extract-utilities/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.230294 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/extract-content/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.565434 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/registry-server/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.572910 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/extract-utilities/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.581754 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/extract-content/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.597930 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mmmzm_9850b956-f0a1-4e29-b5c2-703b0aa7b697/marketplace-operator/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.670966 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/registry-server/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.675739 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/extract-utilities/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.684402 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/extract-content/0.log" Jan 21 07:02:40 crc kubenswrapper[4913]: I0121 07:02:40.028231 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/registry-server/0.log" Jan 21 07:02:40 crc kubenswrapper[4913]: I0121 07:02:40.037730 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/extract-utilities/0.log" Jan 21 07:02:40 crc kubenswrapper[4913]: I0121 07:02:40.042830 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/extract-content/0.log" Jan 21 07:02:43 crc kubenswrapper[4913]: I0121 07:02:43.526703 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:43 crc 
kubenswrapper[4913]: E0121 07:02:43.527205 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:56 crc kubenswrapper[4913]: I0121 07:02:56.526388 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:56 crc kubenswrapper[4913]: E0121 07:02:56.529434 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:59 crc kubenswrapper[4913]: I0121 07:02:59.862837 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 07:02:59 crc kubenswrapper[4913]: I0121 07:02:59.868465 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 07:02:59 crc kubenswrapper[4913]: I0121 07:02:59.881479 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 07:02:59 crc kubenswrapper[4913]: I0121 07:02:59.905497 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.329255 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.340463 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.358696 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.367262 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.377341 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.385026 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.397912 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.403400 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.429150 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.439176 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.552877 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.559425 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 07:03:01 crc kubenswrapper[4913]: I0121 07:03:01.327284 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5gzt8_6cdf7744-1629-46a4-b176-0fc75c149a95/control-plane-machine-set-operator/0.log" Jan 21 07:03:01 crc kubenswrapper[4913]: I0121 07:03:01.346724 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/kube-rbac-proxy/0.log" Jan 21 07:03:01 crc kubenswrapper[4913]: I0121 07:03:01.360200 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/machine-api-operator/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.406202 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/kube-multus-additional-cni-plugins/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.414278 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/egress-router-binary-copy/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.420825 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/cni-plugins/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.430475 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/bond-cni-plugin/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.435830 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/routeoverride-cni/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.442944 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/whereabouts-cni-bincopy/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.448932 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/whereabouts-cni/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.462416 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-l6rtq_e70bbe19-3e5b-4629-b9bf-3c6fc8072836/multus-admission-controller/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.466725 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-l6rtq_e70bbe19-3e5b-4629-b9bf-3c6fc8072836/kube-rbac-proxy/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.519093 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/3.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.534120 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.556174 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-wfcsc_60ed8982-ee20-4330-861f-61509c39bbe7/network-metrics-daemon/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.562359 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-wfcsc_60ed8982-ee20-4330-861f-61509c39bbe7/kube-rbac-proxy/0.log" Jan 21 07:03:08 crc kubenswrapper[4913]: I0121 07:03:08.526440 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:03:08 crc kubenswrapper[4913]: E0121 07:03:08.526926 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:03:23 crc kubenswrapper[4913]: I0121 07:03:23.526931 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:03:23 crc kubenswrapper[4913]: E0121 07:03:23.528044 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:03:34 crc kubenswrapper[4913]: I0121 07:03:34.533468 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:03:34 crc kubenswrapper[4913]: E0121 07:03:34.534963 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:03:49 crc kubenswrapper[4913]: I0121 07:03:49.526282 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:03:49 crc kubenswrapper[4913]: E0121 07:03:49.527151 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:04:04 crc kubenswrapper[4913]: I0121 07:04:04.526402 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:04:04 crc kubenswrapper[4913]: E0121 07:04:04.527297 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:04:16 crc kubenswrapper[4913]: I0121 07:04:16.527127 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:04:16 crc kubenswrapper[4913]: E0121 07:04:16.528161 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:04:27 crc kubenswrapper[4913]: I0121 07:04:27.526223 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:04:27 crc kubenswrapper[4913]: E0121 07:04:27.527010 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:04:42 crc kubenswrapper[4913]: I0121 07:04:42.526297 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:04:42 crc kubenswrapper[4913]: E0121 07:04:42.527456 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:04:56 crc kubenswrapper[4913]: I0121 07:04:56.526769 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:04:56 crc kubenswrapper[4913]: E0121 07:04:56.527822 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:05:09 crc kubenswrapper[4913]: I0121 07:05:09.527086 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:05:11 crc kubenswrapper[4913]: I0121 07:05:11.078111 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"89ea175ca019332f032d60279f93f0918d80e130a13065c463949a31cc71e982"} Jan 21 07:07:08 crc kubenswrapper[4913]: I0121 07:07:08.935827 4913 generic.go:334] "Generic (PLEG): container finished" podID="a42cc95b-6480-4433-9ad8-112d8e53faff" containerID="7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2" exitCode=0 Jan 21 07:07:08 crc kubenswrapper[4913]: I0121 07:07:08.935906 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5jnbm/must-gather-96p7d" 
event={"ID":"a42cc95b-6480-4433-9ad8-112d8e53faff","Type":"ContainerDied","Data":"7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2"} Jan 21 07:07:08 crc kubenswrapper[4913]: I0121 07:07:08.937314 4913 scope.go:117] "RemoveContainer" containerID="7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2" Jan 21 07:07:09 crc kubenswrapper[4913]: I0121 07:07:09.277649 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5jnbm_must-gather-96p7d_a42cc95b-6480-4433-9ad8-112d8e53faff/gather/0.log" Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.856909 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5jnbm/must-gather-96p7d"] Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.857799 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5jnbm/must-gather-96p7d" podUID="a42cc95b-6480-4433-9ad8-112d8e53faff" containerName="copy" containerID="cri-o://89b922a579843f94dde47e41360be3db7362db1133978e47557ded141cd5692e" gracePeriod=2 Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.862571 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5jnbm/must-gather-96p7d"] Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.988625 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5jnbm_must-gather-96p7d_a42cc95b-6480-4433-9ad8-112d8e53faff/copy/0.log" Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.989902 4913 generic.go:334] "Generic (PLEG): container finished" podID="a42cc95b-6480-4433-9ad8-112d8e53faff" containerID="89b922a579843f94dde47e41360be3db7362db1133978e47557ded141cd5692e" exitCode=143 Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.185338 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5jnbm_must-gather-96p7d_a42cc95b-6480-4433-9ad8-112d8e53faff/copy/0.log" Jan 21 07:07:16 crc kubenswrapper[4913]: 
I0121 07:07:16.186100 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.303116 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") pod \"a42cc95b-6480-4433-9ad8-112d8e53faff\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.303245 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") pod \"a42cc95b-6480-4433-9ad8-112d8e53faff\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.309132 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss" (OuterVolumeSpecName: "kube-api-access-l6fss") pod "a42cc95b-6480-4433-9ad8-112d8e53faff" (UID: "a42cc95b-6480-4433-9ad8-112d8e53faff"). InnerVolumeSpecName "kube-api-access-l6fss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.372855 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a42cc95b-6480-4433-9ad8-112d8e53faff" (UID: "a42cc95b-6480-4433-9ad8-112d8e53faff"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.404128 4913 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.404179 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") on node \"crc\" DevicePath \"\"" Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.535099 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42cc95b-6480-4433-9ad8-112d8e53faff" path="/var/lib/kubelet/pods/a42cc95b-6480-4433-9ad8-112d8e53faff/volumes" Jan 21 07:07:17 crc kubenswrapper[4913]: I0121 07:07:17.010695 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5jnbm_must-gather-96p7d_a42cc95b-6480-4433-9ad8-112d8e53faff/copy/0.log" Jan 21 07:07:17 crc kubenswrapper[4913]: I0121 07:07:17.011285 4913 scope.go:117] "RemoveContainer" containerID="89b922a579843f94dde47e41360be3db7362db1133978e47557ded141cd5692e" Jan 21 07:07:17 crc kubenswrapper[4913]: I0121 07:07:17.011444 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:07:17 crc kubenswrapper[4913]: I0121 07:07:17.039377 4913 scope.go:117] "RemoveContainer" containerID="7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2" Jan 21 07:07:38 crc kubenswrapper[4913]: I0121 07:07:38.319691 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:07:38 crc kubenswrapper[4913]: I0121 07:07:38.320418 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:08:08 crc kubenswrapper[4913]: I0121 07:08:08.319510 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:08:08 crc kubenswrapper[4913]: I0121 07:08:08.320129 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"